Meta Platforms, Inc.
Headquartered in the United States, Meta offers some of the world’s most popular social networking and messaging services, including Facebook, Instagram, Messenger, and WhatsApp, which collectively had an estimated 3.6 billion monthly active users worldwide in 2021. The vast majority of Meta’s revenue is derived from advertising.
Meta had another year of tumult in the public eye and lackluster performance in our evaluation. Documents released by former-employee-turned-whistleblower Frances Haugen corroborated years of accusations and grievances from global civil society concerning human rights harms stemming from the company’s services. They also offered irrefutable evidence that Meta routinely breaks or ignores its own rules, especially outside the U.S.
In 2021, we saw plenty of new evidence of such problems. Meta neglected a scourge of harmful content in non-Western countries, including state-backed manipulation campaigns in Azerbaijan and Honduras. It was slow to address hate speech in India and incitement to mob violence by Israeli extremists against Palestinians on WhatsApp. Meta did hire the independent firm BSR to conduct a human rights-based assessment of its impacts in Palestine during the escalated violence of May and June 2021, but BSR’s findings had not been released at the time of publication. In contrast to these crises, Meta responded immediately to Russia’s invasion of Ukraine in February 2022, devoting extra staffing to content review and publishing regular updates on how it handles war-related disinformation and hate speech.
It was no surprise to find that Meta’s content moderation policies lacked clarity and consistency, and that the company failed to show clear evidence that it conducts human rights impact assessments of its terms of service enforcement. It also provided incomplete data about actions it takes to restrict content and accounts violating its rules. This is especially important in light of the company’s decision to temporarily suspend the account of former U.S. President Donald Trump following the January 6 attack on the U.S. Capitol, and the subsequent release of evidence (also by whistleblower Haugen) that Meta had been giving special treatment to the accounts of high-profile politicians and celebrities through its “XCheck” program.
Meta's dual-class share structure continued to be a key factor undercutting efforts to hold the company to account for these harms. Under this structure, CEO Mark Zuckerberg retains 57% of voting power. Shareholders have proposed resolutions to scrap this structure every year since 2014. In 2021, without Zuckerberg’s votes, this resolution would have netted 90% support.
The company’s name change (from Facebook to Meta) and its stated intention to focus on developing a future virtual reality-driven “metaverse” signal a strong inclination to look ahead and build new technologies. This raises the question: How can the company uphold its human rights obligations if it does not first reflect on the harms it has caused and address its many existing policies and practices that so urgently need repair?
The 2022 Big Tech Scorecard covers policies that were active on November 1, 2021. Policies that came into effect after November 1, 2021, were not evaluated for this ranking.
Scores reflect the average score across the services we evaluated, with each service weighted equally.
We rank companies on their governance, and on their policies and practices affecting freedom of expression and privacy.
Meta tied with Microsoft for first place in governance among digital platforms. The company published a clear commitment to protect and respect privacy and freedom of expression and information, and also published a new human rights policy in which it pledged to let human rights guide its development and use of AI. However, the policy did not say whether Meta would fully adhere to international human rights standards in these activities (G1). Despite the presence of Meta’s Free Basics program in numerous countries, the company published no evidence to suggest that it conducts human rights due diligence on its deployment of zero-rating schemes (G4e).
Meta ranked fifth in this category, lacking transparency about its policies affecting users’ freedom of expression and information, including ad-content and ad-targeting rules. The company disclosed some information on how it uses algorithms to curate, rank, and recommend content on Facebook, but this information was incomplete. Disclosures related to algorithms were weaker for Instagram than for Facebook. The company failed to explain how users can control the variables that Instagram’s algorithmic systems take into account, and whether or not these systems are on by default (F12). Meta published no evidence of whether or how it enforces its advertising content and targeting rules (F4c).
Ahead of only one of its U.S. peers (Amazon), Meta was clear about how it responds to government demands for user information (P10a) but failed to explain how it handles that information internally. Although Meta’s data policy provided a clear overview of what data Facebook collects (P3a), the company only described isolated examples of the data it infers (P3b). Meta had the lowest score of any digital platform we evaluated on its transparency regarding options for users to control how their data is collected, inferred, retained, and processed (P7).