A global group of investors with more than $6 trillion in assets has sent letters calling on the 26 tech and telecom companies we ranked in the last RDR Corporate Accountability Index to commit to our core recommendations. We push companies to:

  • implement robust human rights governance
  • maximize transparency on how policies are implemented
  • give users meaningful control over their data and data inferred about them
  • account for harms that stem from algorithms and targeted advertising

Coordinated by the Investor Alliance for Human Rights, the campaign comprises nearly 80 investment groups that are pressing technology companies to resolve these long-standing, systemic issues. The marked increase in support for this statement relative to previous years signals a growing desire among investors for good corporate governance and respect for human rights in the tech sector. The investor groups urged companies to implement key corporate governance measures that we at RDR have long pushed for, including strengthened oversight structures to prevent companies from causing or enabling human rights violations.

Ranking Digital Rights is proud to support the Investor Statement on Corporate Accountability for Digital Rights and investors’ direct engagement with some of the largest ICT companies in the world. Through our annual Corporate Accountability Index, we equip investors and advocates alike with the data and analysis they need to draft and promote shareholder resolutions that put human rights first.

Read the Investor Statement

We invite investors and asset managers seeking guidance on the human rights risks of technology companies to reach out to us at investors@rankingdigitalrights.org.

London street art. Photo by Annie Spratt. Licensed for non-commercial reuse by Unsplash.

Written and compiled by Alex Rochefort, Zak Rogoff, and RDR staff.

The revelations of Facebook whistleblower Frances Haugen, published in SEC filings and in the Wall Street Journal’s “Facebook Files” series, have brought forth irrefutable evidence that Facebook has repeatedly misled or lied to the public, and that it routinely breaks its own rules, especially outside the U.S.

Corroborating years of accusations and grievances from global civil society, the revelations raise the question: What do Facebook’s policies really tell us about how the platform works? The documents offer us a rare opportunity to cross-check the company’s public commitments against its actual practices—and our own research findings of the past six years.

 

How does Facebook handle hate speech?

What Facebook says publicly:

In 2020, Facebook claimed that it proactively removes 95% of posts that its systems identify as hate speech. The remaining 5% are flagged by users and removed on review by moderators.

What the Facebook files prove:

Facebook estimates that it takes action on “as little as 3-5% of hate speech” because of limitations in its automated and human content moderation practices. The company does not remove “95% of hate speech” that violates its policies.

What we know:

While not technically contradictory, these statements are emblematic of a longstanding Facebook strategy of obfuscating and omitting information in transparency reports and other public statements. The 95% figure refers to the share of removed hate speech that Facebook’s automated systems flagged before users reported it; the 3-5% figure refers to the share of all hate speech on the platform that the company acts on at all. These statements reinforce what we’ve found in our research: while Facebook’s policies clearly outline what content is prohibited and how the company enforces its rules, it does not publish data to corroborate these claims. Without this data, it is impossible for researchers to verify that the company does what it says it will do.

Our most recent Facebook company report card highlights the company’s failure to be fully transparent about its content moderation practices. Carrying out content moderation at scale is a complex challenge. But providing more transparency about content moderation practices is not. See our 2020 data on transparency reporting.

 

How does Facebook handle policy enforcement when it comes to human rights violations around the world?

What Facebook says publicly: The company says it takes seriously its role as a communication service for the global community. In a 2020 Senate hearing, CEO Mark Zuckerberg noted that the company’s products “enabled more than 3 billion people around the world to share ideas, offer support, and discuss important issues” and reaffirmed a commitment to keeping users safe.

What the Facebook files prove: Facebook allocates 87% of its budget for combating misinformation to issues and users based in the U.S., even though these users make up only about 10% of the platform’s daily active users. These policy choices have exacerbated the spread of hate speech and misinformation in non-Western countries, undermined efforts to moderate content in regions marked by internal conflict and political instability, and contributed to offline harm and ethnic violence.

What we know: The Haugen revelations corroborate what civil society and human rights activists have been calling attention to for years—Facebook is insufficiently committed to protecting its non-Western users. Across the Global South, the company has been unable—or unwilling—to adequately assess human rights risks or take appropriate actions to protect users from harm. This is especially concerning in countries where Facebook has a de facto monopoly on communications services thanks to its zero-rating practices.

In our 2020 research, Facebook was weak on human rights due diligence, and failed to show clear evidence that it conducts systematic impact assessments of its algorithmic systems, ad targeting practices, or processes for enforcing its Community Standards. The company often points to its extensive Community Standards as evidence that it takes seriously its responsibility to protect people from harm. But we now have proof that these standards are selectively enforced, in ways that reinforce existing structures of power, privilege, and oppression. See our 2020 data on human rights impact assessments for algorithmic systems and zero-rating.

 

How does Facebook handle policy enforcement for high-profile politicians and celebrities?

What Facebook says publicly: Facebook has wavered on the question of whether and how to treat speech coming from high-profile public figures, citing exceptions to its typical content rules on the basis of “newsworthiness.” But in June 2021, the company said that it had reined in these exceptions at the recommendation of the Facebook Oversight Board. A blog post about the shift asserted: “we do not presume that any person’s speech is inherently newsworthy, including by politicians.”

What the Facebook files prove: Facebook maintains a special program, known as XCheck (or “cross-check”) that exempts high-profile users, such as politicians and celebrities, from the platform’s content rules. A confidential internal review of the program stated the following: “We are not actually doing what we say we do publicly….Unlike the rest of our community, these people can violate our standards without any consequences.”

What we know: We know that speech coming from high-profile people, especially heads of state, can have a significant impact on what people believe is true or false, and what they feel comfortable saying online. Facebook maintains an increasingly detailed set of Community Standards describing what kinds of content are and are not allowed on its platform, but as our data over the years has shown, the company has long failed to show evidence (like transparency reports) proving that it actually enforces these rules. What are the human rights consequences of creating a two-tiered system like XCheck? Our governance data also shows that Facebook’s human rights due diligence processes hardly scratch the surface of this question.

 

Does Facebook prioritize growth over democracy and the public interest?

What Facebook says publicly: In a 2020 Facebook post, Mark Zuckerberg announced several policy changes meant to safeguard the platform against threats to the U.S. election, including a ban on political and issue ads, steps to keep misinformation from going viral, and “strengthened enforcement against militias, conspiracy networks like QAnon, and other groups that could be used to organize violence or civil unrest…”

What the Facebook files prove: These measures stayed in place during the election but were quickly rolled back afterward because they undermined “virality and growth on its platforms.” Other interventions that might have reduced the spread of violent or conspiracist content around the 2020 U.S. election were rejected by Facebook executives out of fear they would reduce user engagement metrics. Facebook whistleblower Haugen says the company routinely chooses platform growth over safety.

What we know: We know that Facebook’s systems for moderating both organic and ad content, as well as ad targeting, have a tremendous impact on what information people see in their feeds, and what they consequently believe is true. This means that Facebook plays a role in influencing people’s decisions about who to vote for. The company has failed to publish sufficient information about how it moderates these types of content. And while it has published some policies and statements on these processes, Haugen and others have proven that these statements are not always true. See our 2020 data on algorithmic transparency and rule enforcement related to advertising, ad targeting, and organic content.

 

Does Facebook knowingly profit from disinformation?

What Facebook says publicly: In a 2021 House hearing, Mark Zuckerberg deflected the suggestion from Congressman Bill Johnson, a Republican from Ohio, that Facebook has profited from the spread of disinformation.

What the Facebook files prove: Facebook profits from all of the content on its platform. Its algorithmically fueled, ad-driven business model requires that users stay active on the platform in order for the company to make money from ads.

What we know: As we’ve said before, the company has never been sufficiently transparent about how it builds or uses algorithms.

Automated tools are essential to social media platforms’ content distribution and filtering systems. They are also integral to platforms’ surveillance-based business practices. Yet Facebook, like its competitors, publishes very little about how its algorithms and ad targeting systems are designed or governed; our 2020 research showed just how opaque this space really is. Unchecked algorithmic content moderation and ad targeting processes raise significant privacy, freedom of expression, and due process concerns. Without greater transparency around these systems, we cannot hold Facebook accountable to the public. See our 2020 data on human rights impact assessments for targeted advertising and algorithmic systems.

Facebook’s business model lies at the heart of the company’s many failures. Despite the range of harms it brings to people’s lives and rights, Facebook has continued its relentless pursuit of growth. Growth drives advertising, and ad sales account for 98 percent of the company’s revenue. Unless we address these structural dynamics — starting with comprehensive federal privacy legislation in the U.S. — we’ll be treating these symptoms forever, rather than eradicating the disease.

RDR has contributed to the public consultation on the Canadian government’s proposed legislative and regulatory framework to address harmful content online. The proposal sets out which entities would be subject to the new framework, what types of content would be regulated, the rules and obligations regulated entities would face, and two new regulatory bodies and an advisory body that would oversee the framework. We believe that efforts to address these harms must promote and uplift freedom of expression and information, as well as our fundamental right to privacy. We commend the Canadian government’s objective of creating a safe and open internet, and we have a few recommendations on how the government can tackle the underlying causes of online harms. Read the introduction of our submission below or download it in its entirety here.

Honorable members of the Department of Canadian Heritage:

Ranking Digital Rights (RDR) welcomes this opportunity for public consultation on the Canadian government’s proposed approach to regulating social media and combating harmful content online. We work to promote freedom of expression and privacy on the internet by researching and analyzing how global information and communication companies’ business activities meet, or fail to meet, international human rights standards (see www.rankingdigitalrights.org for more details). We focus on these two rights because they enable and facilitate the enjoyment of the full range of human rights enshrined in the Universal Declaration of Human Rights (UDHR), especially in the context of the internet.

RDR broadly supports efforts to combat human rights harms that are associated with digital platforms and their products, including the censorship of user speech, incitement to violence, campaigns to undermine free and fair elections, privacy-infringing surveillance activities, and discriminatory advertising practices. But efforts to address these harms need not undermine freedom of expression and information or privacy. We have long advocated for the creation of legislation to make online communication services (OCSs) more accountable and transparent in their content moderation practices and for comprehensive, strictly enforced privacy and data protection legislation.

We commend the Canadian government’s objective to create a “safe, inclusive, and open” internet. The harms associated with the operation of online social media platforms are varied, and Canada’s leadership in this domain can help advance global conversations about how best to promote international human rights and protect users from harm. As drafted, however, the proposed approach fails to meet its stated goals and raises a set of issues that jeopardize freedom of expression and user privacy online. We also note that the framework contradicts commitments Canada has made to the Freedom Online Coalition (FOC) and the Global Conference for Media Freedom, as well as its previous work initiating the U.N. Human Rights Council’s first resolution on internet freedom in 2012. As Canada prepares to assume the chairmanship of the FOC next year, it is especially important for its government to lead by example. Online freedom begins at home. As RDR’s founder Rebecca MacKinnon emphasized in her 2013 FOC keynote speech in Tunis, “We are not going to have a free and open global Internet if citizens of democracies continue to allow their governments to get away with pervasive surveillance that lacks sufficient transparency and public accountability.”

Like many other well-intentioned policy solutions, the government’s proposal falls into the trap of focusing exclusively on the moderation of user-generated content while ignoring the economic factor that drives platform design and corporate decision-making: the targeted-advertising business model. In other words, restricting specific types of problematic content misses the forest for the trees. Regulations that focus on structural factors—i.e., industry advertising practices, user surveillance, and the algorithmic systems that underpin these activities—are better suited to address systemic online harms and, if properly calibrated, more sensitive to human rights considerations.

In this comment we identify five issues of concern within the proposal and a set of policy recommendations that, if addressed, can strengthen human rights protections and tackle the underlying causes of online harms.

Download our entire submission here.

"Social Decay" artwork by Andrei Lacatusu, licensed for reuse (CC BY-NC-ND 2.0)


This is the RADAR, Ranking Digital Rights’ newsletter. This special edition was sent on September 23, 2021. Subscribe here to get The RADAR by email.

A bombshell series published last week by the Wall Street Journal shows how Facebook’s insatiable thirst for user data, and its consequent obsession with keeping users engaged on the platform, takes priority over the public interest again and again, even when people’s lives and fundamental rights are at imminent risk. It also provides new evidence that Facebook does not follow its own rules when it comes to moderating online content, especially outside the U.S.

In a September 18 blog post, Facebook’s VP of Global Affairs Nick Clegg wrote that the stories contained “deliberate mischaracterizations” of the company’s work and motives, but he pointed to no specific examples of details that the stories got wrong. The evidence — internal documents, emails, and dozens of interviews with former staff — is difficult to refute. And it is not especially surprising. It builds on a pattern that journalists, researchers, and civil society advocates have been documenting for years.

One story in the series offers a litany of instances in which Facebook employees tried to alert senior managers to brutal abuses of the platform in developing countries, only to have their concerns pushed aside. Former employees told WSJ of company decisions to “allow users to post videos of murders, incitements to violence, government threats against pro-democracy campaigners and advertisements for human trafficking,” despite all these things going against Facebook’s Community Standards. A former executive said that the company characterizes these issues as “simply the cost of doing business.”

Consistent with years of reports and grievances from global civil society, and more recent accounts from whistleblowers, these stories shed new light on Facebook’s long-standing neglect of human rights harms that stem from its platform, but occur far away from Menlo Park. One internal document showed that of all the time staff spent combatting disinformation, only 13 percent of it was devoted to disinfo campaigns outside the U.S.

The company likely prioritizes content moderation in the U.S. because it faces the greatest regulatory and reputational risks on its home turf. But the U.S. is no longer the epicenter of its user base. Thanks to Facebook’s ruthless pursuit of growth in the Global South, most of its users today do not live in stable democracies or enjoy equal protection before the law. As Facebook endlessly connects users with friends, groups, products, and political ads, it creates a virtual minefield — with real life-or-death consequences — for far too many people worldwide.

Think back to late August, when social media lit up with reports of Facebook and Instagram users in Afghanistan frantically working to erase their histories and contact lists. The company offered some “emergency response” measures, allowing Afghan users to limit who could see their feeds and contacts. But on a platform that constantly asks you to share information about yourself, your friends, your activities, and your whereabouts, this is a band-aid solution at best.

In situations of violent conflict, contestation of political power, or weak rule of law, the protection of a person’s privacy can mean the protection of their safety, their security, their right to life. Matt Bailey underlined this in a piece for PEN America:

…in a cataclysm like the one the Afghan people are experiencing, this model of continuously accruing data—of permanent identity, publicity, and community—poses a special danger. When disaster strikes, a person can immediately change how they dress, worship, or travel but can’t immediately hide the evidence of what they’ve done in the past. The assumptions that are built into these platforms do not account for the tactical need of Afghan people today to appear to be someone different from who they were two weeks ago.

But this is not just a problem for people in Afghanistan, or Myanmar, or India, or Palestine, where some of the company’s more egregious acts of neglect have played out and received at least some attention in the West. The problem is systemic.

Facebook employees often cite “scale” as a reason why the company will never be able to consider every human rights violation or scrub all harmful content from its platform. But how exactly did Facebook come to operate at such an awesome scale? Perhaps more than any other social media platform, Facebook has cannibalized competitors and collected and monetized user data at an astonishing rate, putting growth and data collection ahead of all other interests, including the human rights of its 3 billion users.

Our work at Ranking Digital Rights rests on the principle that regardless of scale, companies have a responsibility to respect human rights, and that they must carry this out (as written in the UN Guiding Principles on Business and Human Rights) “to avoid infringing on the rights of others and address adverse impacts with which they are involved.” We push companies to commit to respecting human rights across their business practices, and then push them to implement these commitments at every level of their organization. Facebook made such a commitment earlier this year. But to what end?

As evidence of its disregard for people’s rights continues piling up, Facebook’s promises ring hollow, as do its lackluster efforts to improve transparency reporting and carry out (and actually act upon) human rights due diligence. Today, leaks from whistleblowers and former employees seem like the only reliable source of information about how this company actually operates.

For us, this raises the question: How valuable is it to assess Facebook’s policies alone? In this case, and with some of the other tech giants we rank, would it be more effective to expand our focus to include leaks and other hard evidence of corporate practice?

We don’t have all the answers yet, but as revelations like these become more and more frequent, we will continue asking these questions of ourselves, our peers, and the companies we rank. If tech companies do not want to tell the world how they work, how they profit, and how they factor the public interest into their bottom line, we will need to find new ways to force their hand.

Facebook is an ad tech company. That’s how we should regulate it.

RDR Senior Policy and Partnerships Manager Nathalie Maréchal is calling on platform accountability advocates to start following the money when it comes to regulating Facebook. In a recent piece for Tech Policy Press, Maréchal wrote:

[We] need to reframe the ‘social media governance’ conversation as one about regulating ad tech. Facebook, Twitter, YouTube, TikTok and the rest exist for one purpose: to generate ad revenue. Everything else is a means for producing ad inventory.

Maréchal also spoke with The Markup’s Aaron Sankin about Facebook’s claim that it supports internet regulations that would mandate specific approaches to content moderation. We think content moderation is important and raises really difficult questions, but we can’t let this distract us from ads, which are the main driver of Facebook’s profits.

“…[As] long as everyone is focused on user content and all of its discontents, we are not talking about advertising. We are not talking about the money,” Maréchal said. Read via The Markup

Telenor mobile shop in Yangon, Myanmar. Photo by Remko Tanis via Flickr (CC BY-NC-ND 2.0)


Another company in crisis: Telenor’s fraught departure from Myanmar

In July, Norwegian telecommunications firm Telenor announced plans to sell its subsidiary in Myanmar to M1 Group, a Lebanese conglomerate with a record of corrupt practices and human rights abuses. Since then, it has come to light that the Myanmar military, which took control of the country in a February 1 coup, ordered telecommunications providers to install surveillance technology on their networks to help boost the military’s snooping capacity. The sale has yet to be approved by the military regime, and industry sources cited by Nikkei Asia suspect the deal may be rejected.

Human rights advocates in Myanmar and around the world have been pushing Telenor to take responsibility for its human rights obligations and stand up against military demands. In August, RDR joined a coalition letter to Telenor Group board chair Gunn Wærsted calling on the company to either cancel or pause the sale in order to carry out robust due diligence measures, including consultation with local civil society and publication of human rights impact assessments on the effects of the sale.

What’s new at RDR?

Changes are coming to the RDR Index! This spring, we looked back on five years of the RDR Corporate Accountability Index and made a major decision: In 2022, we will split our flagship research product into two separate rankings. Next April, we will release a new ranking of digital platforms. In October 2022, we expect to publish a new ranking of telecommunications companies. This approach will allow us to dedicate more time to studying the contexts in which these companies operate and to streamline our engagement efforts around all of the companies we rank.

The 2020 RDR Index, now in translation: The executive summary of the 2020 RDR Index is now available in six major languages: Arabic, Chinese, French, Korean, Russian, and Spanish! As in years past, we partnered with Global Voices Translation Services to translate these key components of our research. Check them out.

#KeepItOn: Campaign letters to prevent network shutdowns in Russia, Zambia, Ethiopia

As members of Access Now’s #KeepItOn campaign coalition to prevent network shutdowns worldwide, we supported the following advocacy letters in recent months:

EVENTS
Tech Policy Press symposium | Reconciling Social Media & Democracy
October 7 at 1:00 PM ET | Register here
At this convening to discuss various proposals to regulate the social media ecosystem, Nathalie Maréchal will join panelists including Francis Fukuyama, Cory Doctorow, and Daphne Keller to promote an approach to corporate governance that can advance human rights.


Translation by Icon Lauk via The Noun Project, CC BY.

Ranking Digital Rights has partnered with Global Voices Translation Services to translate key components of the 2020 RDR Corporate Accountability Index into six major languages: Arabic, Chinese, French, Korean, Russian, and Spanish!
Visit our translations page.

The RDR Index is global in scope. We evaluate 26 companies, whose products and services are used by over four billion people worldwide, in all kinds of cultures and contexts. The languages of our translations reflect this diversity, covering the most commonly spoken languages in the countries where the companies we rank are located.
We believe in strengthening and upholding the role of local civil society organizations, researchers, and advocates. Making our resources available in multiple languages is a key part of how we think about the reach and impact of our corporate accountability research and engagement. In September 2020, we published translations of our revised methodology, so that anyone around the world can use our standards to hold companies accountable and build unique advocacy campaigns.

I’m RDR’s global partnerships manager. Since joining RDR in February this year, I’ve been working to build partnerships and engage civil society groups all over the world. We want to encourage further scrutiny of technology companies across different countries and regions, particularly those that our in-house research does not cover. To achieve this, we’re nurturing relationships with our allies so we can collaborate on and support their work, while continuing to make our materials accessible to a broad range of stakeholders. This also includes developing resources and workflows that provide direct guidance on adapting RDR’s methodology to the specific goals and local contexts of our partners.
With these translations, we hope to support broader advocacy actions that can leverage our analysis and data and bring closer attention to our rigorous human rights standards.

It takes a village: We thank Global Voices for their work on the translations, as well as our regional partners for their help in reviewing and promoting these materials!

Get in touch: If you’re a researcher or advocate interested in learning more about our methodology, our team would love to talk to you! Write to us at info@rankingdigitalrights.org.