Digital platforms

Meta Platforms, Inc.

Rank: 5th
Score: 46%

Headquartered in the United States, Meta offers some of the world’s most popular social networking and messaging services, including Facebook, Instagram, Messenger, and WhatsApp, which collectively had an estimated 3.6 billion monthly active users worldwide in 2021. The vast majority of Meta’s revenue is derived from advertising.

Digital platforms ranked by overall score:

  • Yahoo (2nd): 54%
  • Google (4th): 47%
  • Meta (5th): 46%
  • Apple (6th): 44%
  • Kakao (6th): 44%
  • Yandex (8th): 35%
  • Baidu (9th): 28%
  • VK (9th): 28%
  • Alibaba (11th): 26%
  • Samsung (11th): 26%
  • Amazon (13th): 25%
  • Tencent (13th): 25%

Meta had another year of tumult in the public eye and lackluster performance in our evaluation. Documents released by former-employee-turned-whistleblower Frances Haugen corroborated years of accusations and grievances from global civil society concerning human rights harms stemming from the company’s services. They also provided irrefutable proof that Meta routinely breaks or ignores its own rules, especially outside the U.S.

In 2021, we saw plenty of new evidence of such problems. Meta neglected a scourge of harmful content in non-Western countries, including state-backed manipulation campaigns in Azerbaijan and Honduras. It was slow to address hate speech in India and incitement to mob violence by Israeli extremists against Palestinians on WhatsApp. Meta did hire the independent firm BSR to conduct a human rights-based assessment of its impacts in Palestine during the period of escalated violence in May and June of 2021, but BSR’s findings had not been released at the time of publication. In contrast to its handling of these crises, Meta responded immediately to Russia’s invasion of Ukraine in February 2022, devoting extra staff to content review and publishing regular updates on how it handles war-related disinformation and hate speech.

It was no surprise to find that Meta’s content moderation policies lacked clarity and consistency, and that the company failed to show clear evidence that it conducts human rights impact assessments of its terms of service enforcement. It also provided incomplete data about actions it takes to restrict content and accounts violating its rules. This is especially important in light of the company’s decision to temporarily suspend the account of former U.S. President Donald Trump following the January 6 attack on the U.S. Capitol, and the subsequent release of evidence (also by whistleblower Haugen) that Meta had been giving special treatment to the accounts of high-profile politicians and celebrities through its “XCheck” program.

Meta's dual-class share structure continued to be a key factor undercutting efforts to hold the company to account for these harms. Under this structure, CEO Mark Zuckerberg retains 57% of voting power. Shareholders have proposed resolutions to scrap this structure every year since 2014. In 2021, without Zuckerberg’s votes, this resolution would have netted 90% support.

The company’s name change (from Facebook to Meta) and its stated intention to focus on developing a future virtual reality-driven “metaverse” signal a strong inclination to look ahead and build new technologies. This raises the question: How can the company uphold its human rights obligations if it does not first reflect on the harms it has caused and address its many existing policies and practices that so urgently need repair?

Key takeaways

  • Meta failed to show evidence that it conducts systematic human rights impact assessments in a variety of areas, including terms of service enforcement, targeted advertising policies and practices, development and deployment of algorithmic systems, and its zero-rating programs.
  • Meta’s transparency reports, which still cover only Facebook and Instagram, had a critical gap: advertising. The company offered only fragmented data on content and account restrictions and no data on the number of ads the company restricts.
  • Meta was not transparent about how it handles user data. It provided incomplete information about its data-inference practices and limited options for users to control their data. It also failed to describe whether or how it acquires and processes user data through purchases, data-sharing agreements, and other contractual relationships with third parties.

Key recommendations

  • Improve human rights due diligence. Meta should carry out comprehensive human rights impact assessments on its own policy enforcement, targeted-advertising practices, algorithmic use and development policies, and zero-rating partnerships.
  • Be more transparent about government censorship demands. Meta should clarify its process for responding to government censorship demands targeting content and accounts of WhatsApp and Facebook Messenger users. It should expand the data it publishes about these types of demands by including a breakdown of the total quantity of demands per country, listing the number of accounts affected, and identifying the subject matter associated with those demands.
  • Clarify handling of user information. Meta should be more transparent about its data-inference practices and provide its users with better options and tools to control their information. It should also clarify whether (and how) it acquires user information from third parties through non-technical means.
  • Include data in the transparency report about advertising rules enforcement. Meta should share the volume and nature of actions it takes to restrict advertising content that violates its advertising content policies and advertising targeting policies.

Services evaluated:

  • Facebook
  • Instagram
  • WhatsApp
  • Messenger

Company facts:

  • Market cap: $583.60 billion (as of April 13, 2022)
  • Ticker: NasdaqGS: FB
  • Stock structure: Multi-class. Class A shareholders receive one vote per share; Class B (insider) shareholders receive ten votes per share.
  • Read more about how stock structures can be a barrier to shareholder participation
  • Website: https://www.meta.com

The 2022 Big Tech Scorecard covers policies that were active on November 1, 2021. Policies that came into effect after November 1, 2021, were not evaluated for this ranking.

Scores reflect the average score across the services we evaluated, with each service weighted equally.

  • Lead researchers: Veszna Wessenauer, Afef Abrougui

Changes since 2020

  • In a new human rights policy released in 2021, Meta disclosed that human rights guide its use and development of artificial intelligence, although it did not ground this commitment in international human rights standards.
  • Instagram lost points on remedy. The company published a blog post in 2018 that offered a time frame for reviewing appeals of content removals on Instagram, but this information was never incorporated into formal company policy. The blog post is now out of date by our standards, so we are not giving Meta credit on this element.
  • WhatsApp users lost important mechanisms for controlling their data. In our 2020 evaluation, WhatsApp's privacy policy indicated that users could choose not to have their account information shared with Facebook. In our 2022 evaluation, we found this statement had been removed from the privacy policy. Meta also added new details indicating that users' data is used for advertising, with no opt-out or control options.

Scores since 2017

Meta’s overall score by year:

  • 2017: 53%
  • 2018: 55%
  • 2019: 57%
  • 2020: 45%
  • 2022: 46%
Most companies’ scores dropped between 2019 and 2020 with the inclusion of our new indicators on targeted advertising and algorithmic systems. To learn more, please visit our Methodology development archive.
Scores by category:

  • Governance: 65%
  • Freedom of expression: 36%
  • Privacy: 46%

We rank companies on their governance, and on their policies and practices affecting freedom of expression and privacy.

Governance 65%

Meta tied with Microsoft for first place in governance among digital platforms. The company published a clear commitment to protect and respect privacy and freedom of expression and information, and also published a new human rights policy in which it pledged to let human rights guide its development and use of AI. This policy did not say whether Meta would fully adhere to international human rights standards in these activities (G1). Despite the presence of Meta’s Free Basics program in numerous countries, the company published no evidence to suggest that it conducts human rights due diligence on its deployment of zero-rating schemes (G4e).

Freedom of expression 36%

Meta ranked fifth in this category, lacking transparency about its policies affecting users’ freedom of expression and information, including its ad-content and ad-targeting rules. The company disclosed some information on how it uses algorithms to curate, rank, and recommend content on Facebook, but this information was incomplete. Disclosures related to algorithms were weaker for Instagram than for Facebook: the company failed to explain how users can control the variables that Instagram’s algorithmic systems take into account, and whether these systems are on by default (F12). Meta provided no evidence of whether or how it enforces its advertising content and targeting rules (F4c).

Privacy 46%

Lagging behind all but one of its U.S. peers (Amazon), Meta was clear about how it responds to government demands for user information (P10a) but failed to explain how it handles that information internally. Although Meta’s data policy provided a clear overview of what data Facebook collects (P3a), the company only described isolated examples of the data it infers (P3b). Meta had the lowest score of any digital platform we evaluated on its transparency regarding options for users to control how their data is collected, inferred, retained, and processed (P7).