Digital platforms

Facebook, Inc.

Rank: 5th
Score: 45%

Headquartered in the United States, Facebook offers some of the world’s most popular social networking and messaging services, including Facebook, Instagram, Messenger, and WhatsApp, which collectively have an estimated 3.21 billion active users worldwide.[1]

Facebook ranked fifth among digital platforms. The company made plenty of headlines in 2020, but little progress in the RDR Index. Facebook announced a range of new content rules in response to the spread of disinformation about both the U.S. election and the COVID-19 pandemic. It faced a record $5 billion penalty from the U.S. Federal Trade Commission (FTC) for privacy violations, intense antitrust scrutiny, and a major ad boycott protesting lackluster enforcement of its policies on hate speech and incitement to violence. Although Facebook introduced some accountability processes, including the Oversight Board, it made only marginal progress in all three categories of the RDR Index.

Key takeaways

  • Facebook was weak on human rights due diligence and failed to show clear evidence that it conducts systematic impact assessments of its terms of service enforcement, its targeted advertising policies and practices, its development and deployment of algorithmic systems, or its zero-rating programs.
  • Facebook’s transparency reporting had a critical gap: advertising. The company gradually expanded the scope of its policy enforcement report to cover Instagram in addition to Facebook, but the report offered only fragmented data on content and account restrictions and no data on the number of ads the company restricts.
  • Facebook lacked transparency about how it develops and uses algorithms. It published a policy describing some of the ways its flagship social media service uses algorithmic systems, but offered no policy on how it develops them. The information it published on how it uses algorithms in areas such as appeals and ranking systems remained opaque.

Key recommendations

  • Improve human rights due diligence. Facebook should carry out comprehensive human rights impact assessments on its own policy enforcement, targeted advertising practices, algorithmic use and development, and zero-rating partnerships.
  • Expand transparency on enforcing content moderation rules. Facebook should significantly improve transparency and accountability in its content moderation by publishing consistent data on the actions it takes to enforce platform rules, including its ad content and ad targeting policies. These reports should be broken down by country, type of restriction, and content type.
  • Commit to human rights in the development and use of algorithms. Facebook should explicitly commit to following international human rights standards in developing and using algorithms and publish information about how they are developed and used across its operations, including in the enforcement of its content policies.

Services evaluated: Facebook, Instagram, Messenger, WhatsApp

The 2020 RDR Index covers policies that were active between February 8, 2019, and September 15, 2020. Policies that came into effect after September 15, 2020, were not evaluated for this Index.

Scores reflect the average score across the services we evaluated, with each service weighted equally.

  • Lead researchers: Jan Rydzak, Veszna Wessenauer

Changes since 2019

  • Facebook improved its security policies by clarifying protocols for preventing unauthorized employee access to user data and committing to notify users in cases of data breaches.
  • As a result of a settlement with the FTC, Facebook committed to conducting privacy impact assessments on the enforcement of its privacy policies.
  • Facebook’s revised privacy policy removed previous information about its data retention policies.

Facebook gained 0.89 points on comparable indicators since the 2019 RDR Index.

  • Governance: 62%
  • Freedom of expression: 35%
  • Privacy: 46%

We rank companies on their governance, and on their policies and practices affecting freedom of expression and privacy.

Governance 62%

Facebook had the third-highest governance score among digital platforms, but fell short in several areas, notably in its transparency on human rights impact assessments and remedy.

  • Commitment to human rights: While Facebook published a commitment to protect and respect privacy and freedom of expression and information, the company had yet to publish a commitment to adhere to human rights principles as it develops and deploys algorithms (G1).
  • Human rights due diligence: Facebook’s human rights due diligence largely failed to focus on areas key to its business model. Facebook conducted or commissioned assessments of human rights impacts in several countries where it operates, releasing summaries of three of them in 2020 (G4a). It showed some additional progress through its settlement with the FTC, under which Facebook is obligated to assess the impact of its privacy policies and practices on its users (G4b). Facebook’s third-party civil rights audit evaluated the impact of both the company’s ad targeting practices (G4c) and its algorithmic systems (G4d). This made Facebook the only company to publish evidence of some form of impact assessment on targeted advertising (G4c). But the audit failed to consider human rights impacts other than discrimination and limited its scope to the United States, and it was unclear whether Facebook would build such assessments into its activities and conduct them on a regular basis. Despite the presence of Facebook’s Free Basics program in numerous countries, the company published nothing to suggest that it conducts human rights due diligence on its deployment of zero-rating schemes (G4e).
  • Stakeholder engagement: Facebook is a member of the Global Network Initiative (GNI), a multi-stakeholder organization. However, GNI focuses primarily on government demands and does not cover a wider set of human rights issues that internet users face (G5).
  • Remedy: Facebook disclosed little about its remedy procedures (G6a), but it scored the highest of any digital platform we evaluated on content moderation appeals (G6b). It provided details on its appeals mechanisms for both Facebook and Instagram, though not for WhatsApp or Messenger.

Freedom of expression 35%

Facebook published information about its own content rules, including its ad content and ad targeting rules, but provided little evidence of how it enforces these policies.

  • Content moderation: Facebook’s content rules were easy to find and understand (F1a), but the company did not commit to notifying users of policy changes. It also removed previous versions of its terms for Facebook and Messenger from its website (F2a). Facebook had strong disclosure on what content is prohibited and how it enforces its content rules (F3a). The company published a quarterly enforcement report that gradually encompassed more categories of content Facebook removed for violations of its Community Standards; in 2019, the company added Instagram to this report. Still, it conflated different restriction types and failed to disclose figures by country, among other shortcomings (F4a, b). Facebook’s bot policy lacked clarity, including on how it is enforced (F13).
  • Algorithmic use and content curation: Facebook published a policy covering some of the ways it uses algorithms on its flagship social networking platform. This policy—the News Feed Publisher Principle Guidelines—was difficult to find, and we found no similar policies for Instagram, WhatsApp, or Messenger (F1d). Facebook also disclosed selected information on how it uses algorithms to curate, rank, and recommend content on Facebook, but this information was incomplete. It also largely failed to account for similar systems on Instagram (F12).
  • Advertising content and targeting: Ad content policies for Facebook were easy to find, but less so for Instagram. WhatsApp offered no ad content policy at all (F1b). The company’s ad targeting rules were scattered across numerous policies (F1c). Ad content and targeting rules did not specify the technologies used to scan ads for potential violations (F3b, c), nor did Facebook provide any proof of enforcement by publishing data about the actions it took to restrict content or accounts for violating its ad content or targeting rules (F4c).
  • Censorship demands: Facebook disclosed comprehensive information on how it responds to government demands to restrict content and accounts on Facebook and Instagram, although we found no similar information for WhatsApp or Messenger (F5a). The company failed to disclose adequate data about its compliance with these demands, notably by reporting only those with which it complied rather than those it received (F6). The company published limited information about how it responds to private requests to restrict content, such as when private parties allege copyright and intellectual property violations (F5b). The data Facebook published on private requests was inconsistent and limited to Facebook and Instagram (F7).

Privacy 46%

Facebook had strong disclosure of government demands for user information and made slight improvements on security, but still lagged behind nearly all of its U.S. peers.

  • Handling of user data: Although Facebook’s Data Policy contained information on what data Facebook collects (P3a), the company described only isolated examples of the data it infers (P3b). Facebook had the second-lowest score of any digital platform we evaluated on its transparency regarding options for users to control how their data is collected, inferred, retained, and processed (P7). It also failed to describe whether or how it acquires and processes user data through purchases, data-sharing agreements, and other contractual relationships with third parties (P9). Like all other digital platforms, Facebook did not publish a policy explaining how it develops and trains its algorithms with user information and other data (P1b).
  • Government and private demands for user data: Facebook disclosed comprehensive information about its process for responding to government demands for user information (P10a). It released data on these requests in a transparency report but aggregated this data, or omitted it entirely, in some areas (P11a). Like other U.S. companies, Facebook did not divulge the exact number of requests it received for user data under the Foreign Intelligence Surveillance Act or through National Security Letters, or the actions it took in response, since it is prohibited by law from doing so. For countries other than the U.S., it did not distinguish between requests for real-time and stored communications or between requests for content and non-content data. The company disclosed nothing about its processes for responding to private requests and reported no data about such requests (P10b, P11b).
  • Security: Facebook improved its security policies in response to a 2019 FTC settlement that required stronger security measures, including safeguards limiting employee access to user data (P13). But Facebook still lacked clear disclosure of how it handles data breaches (P15), and while it offered account security measures beyond two-factor authentication on Facebook and Messenger, it failed to provide them on Instagram and WhatsApp (P17).

Footnotes

[1] Facebook, “Facebook Q3 2020 Results,” October 29, 2020, https://s21.q4cdn.com/399680738/files/doc_financials/2020/q3/FB-Q3-2020-Earnings-Presentation.pdf. The number is based on the company’s “Family Monthly Active People” (MAP) metric, which estimates the number of unique, registered, and logged-in users who visited at least one of these services in the 30 days leading up to the measurement date.