Digital platforms

Meta Platforms, Inc.

Rank: 3rd
Score: 47%

Headquartered in the United States, Meta offers some of the world’s most popular social networking and messaging services, including Facebook, Instagram, Messenger, and WhatsApp, which collectively had an estimated 3.35 billion daily active users worldwide in 2024. The vast majority of Meta’s revenue is derived from advertising.

Rank  Company  Score
3     Meta     47%
4     Apple    44%
4     Kakao    44%
7     X        40%
8     Yandex   37%
9     Baidu    33%
11    Tencent  30%
12    Samsung  28%
13    Amazon   27%
13    VK       27%

In January 2025, Meta introduced drastic changes to its content moderation policies, promising “more speech and fewer mistakes.” As part of these changes, the company loosened its “hate speech” policy and renamed it the “hateful conduct” policy. Under the new policy, the company no longer removes “statements of inferiority, expressions of contempt or disgust; cursing; and calls for exclusion or segregation” or “the usage of slurs that are used to attack people on the basis of their protected characteristics.” The new policy further allows for “sex- or gender-exclusive language,” for instance, when discussing access to bathrooms or joining law enforcement or the military. The company also announced an end to its independent fact-checking program, which will be replaced by Community Notes. This approach, following in the footsteps of X under Elon Musk, outsources fact-checking to users.

Meta further changed the way it enforces its rules by limiting the automated detection of policy violations to flag only illegal activity and “high-severity violations, like terrorism, child sexual exploitation, drugs, fraud and scams.” Previously, it had deployed automated systems to proactively scan content for all violations of its rules. Both in the U.S. and globally, civil society groups, journalists, and experts have sounded the alarm over the impacts of these changes on human rights, the free flow of factual information, the rise of hate and toxicity, and the exclusion of marginalized communities and voices. Prior to these changes, Meta had been facing criticism from civil society for “systematically” silencing Palestinian voices in the context of the Israeli war on Gaza. The company also ran misleading advertisements on Facebook and Instagram that attacked the pro-Palestine movement, conflating support for Palestine with support for Hamas, a designated terrorist organization in the U.S.

On the privacy front, the company was hit with probes, complaints, and fines over its data practices, including a USD 220 million fine in 2024 from Nigeria’s Federal Competition and Consumer Protection Commission for the “unauthorised transfer and sharing of Nigerian data.” Following complaints, it was forced to pause its use of user data to train AI models in both Brazil and the EU.

Meta’s performance stagnated in the 2025 RDR Index, with its overall score improving by just one percentage point. While it committed to protecting human rights, it provided users with only limited mechanisms to submit freedom of expression complaints and no mechanism for reporting privacy concerns. It disclosed a process for users to appeal content moderation actions on Facebook and Instagram, but not on Messenger and WhatsApp. Additionally, it disclosed limited information on assessing privacy and discrimination risks associated with its policy enforcement process[1]. The company assessed some freedom of expression and discrimination risks associated with its use of algorithms to detect and moderate content in Israel and Palestine in May 2021. However, it was unclear whether any of these assessments were part of regular and systematic due diligence on the company’s policy enforcement processes and use of algorithmic systems.

Key takeaways

  • Meta disclosed a robust due diligence process on government regulations, but it lacked transparency on its assessments of human rights risks associated with its policy enforcement processes and deployment and development of algorithmic systems. It did not disclose any information about its due diligence processes for its targeted advertising policies and practices or its zero-rating program.
  • Meta provided no remedy mechanisms to address users’ privacy complaints. Its freedom of expression remedy mechanism was limited in scope, covering only some types of content restrictions for Facebook, Messenger, and Instagram. The company did not disclose any remedy mechanisms for WhatsApp.
  • Meta’s Community Standards Enforcement Report provided some data on the volume of content and number of accounts restricted, broken down by type of violation and the method used to identify it. However, this limited data was only provided for Facebook and Instagram. The company provided no data on the number of advertisements it restricted.

Key recommendations

  • Improve remedy. The company should ensure that its remedy mechanisms cover all freedom of expression violations and enable users to submit privacy complaints. It should also provide Messenger and WhatsApp users with the ability to appeal content moderation actions.
  • Clarify handling of user information. Meta should be more transparent about its data inference practices and provide its users with better options and tools to control their information. It should also clarify whether, and how, it acquires user information from third parties through technical means, such as cookies.
  • Improve security policies. The company should limit and monitor employee access to user information across its services and commission third-party security audits on its products and services. It should also implement end-to-end encryption for user communications on Instagram by default and disclose a clear process for responding to data breaches.

Services evaluated:

  • Facebook
  • Instagram
  • WhatsApp
  • Messenger

Company details:

  • Market cap: USD 1.48 trillion (as of April 1, 2025)
  • Ticker: NasdaqGS: META
  • Stock structure: Multi-class. Class A shareholders receive one vote per share; Class B (insider) shareholders receive ten votes per share.
  • Website: https://www.meta.com

The 2025 RDR Index: Big Tech Edition covers policies that were active on August 1, 2024. Policies that came into effect after August 1, 2024, were not evaluated for this benchmark.

Scores reflect the average score across the services we evaluated, with each service weighted equally.

  • Lead researchers: Afef Abrougui, Veszna Wessenauer

Changes since 2022

  • Meta no longer disclosed a human rights impact assessment of discrimination risks associated with its targeted advertising policies and practices.
  • Meta clarified its use of algorithmic systems to curate, rank, and recommend content on Facebook and Instagram, and it provided Instagram users with options to control the variables that the systems take into account.
  • Meta disclosed that it has a security team in place conducting security audits on Instagram and WhatsApp.
  • Instagram began implementing end-to-end encryption for user chats.

Scores since 2017

Year   Score
2017   53%
2018   55%
2019   57%
2020   45%
2022   46%
2025   47%
Most companies’ scores dropped between 2019 and 2020 with the inclusion of our new indicators on targeted advertising and algorithmic systems. To learn more, please visit our Methodology development archive.
Governance: 64%
Freedom of expression: 38%
Privacy: 46%

We rank companies on their governance, and on their policies and practices affecting freedom of expression and privacy.

Governance 64%

Meta received the highest score in this category among all companies. It had an explicit, clearly articulated policy commitment to freedom of expression and information and the right to privacy. However, it did not make an explicit commitment to human rights in its development and deployment of algorithmic systems, stating only that it is guided by human rights and recognizes their importance in this area (G1). It disclosed board-, executive-, and management-level oversight of how its practices affect human rights (G2). It also provided its employees with training on freedom of expression and privacy issues and had an employee whistleblower program, allowing employees to raise human rights concerns (G3). Further, the company provided relatively strong disclosures about its assessments of risks related to government regulations (G4a). However, it disclosed little information about human rights impact assessments related to its own policy enforcement and its development and use of algorithmic systems (G4b, G4d). It provided no information on whether it assesses risks associated with its targeted advertising policies and practices (G4c). Moreover, the civil rights audit the company conducted in 2020, which included an assessment of discrimination risks associated with its political advertising in the U.S., has since become outdated. The company provided users with remedy mechanisms covering some freedom of expression complaints for Facebook, Instagram, and Messenger. However, it did not provide remedy mechanisms for WhatsApp or disclose a process for users to submit privacy complaints (G6a).

Freedom of expression 38%

Meta ranked sixth in the freedom of expression category. It disclosed the types of content and activities it does not allow and its process for enforcing these rules (F3a). It was less transparent about its advertising content and advertisement targeting rules and how it enforces them (F3b, F3c). It published some data about its content and account restrictions to enforce its terms of service, but this covered only Facebook and Instagram (F4a, F4b). It provided no data about advertisements it removed for violating its advertisement content and targeting rules (F4c). Further, the company lacked clarity about its handling of government censorship demands (F5a): while it was transparent about how it responds to such demands for Facebook and Instagram, it disclosed no such process for Messenger and WhatsApp. Its transparency report on government censorship demands did not specify the number of demands the company received and still failed to include data on Messenger and WhatsApp (F6).

Privacy 46%

Meta ranked fourth in the privacy category, ahead of its U.S. peers Apple, Microsoft, and Google. Although the company disclosed the user information it collects (P3a), it was not transparent about which user information it infers, disclosing only that it infers user interests on Facebook (P3b). It disclosed some of the information it shares with third parties (P4) and provided limited information on its data retention policies (P6) and user options to control their information (P7). Further, it disclosed comprehensive information about its process for responding to government demands for user information (P10a) and published a transparency report that provided some insights into the number of demands it received and its compliance rates (P11a). However, the company disclosed nothing about its processes for responding to private requests for user information and reported no data about these requests (P10b, P11b). The company had a security team in place conducting security audits on its products and services but did not say whether it commissions third-party security audits (P13). The company was not transparent about how it responds to data breaches (P15).

Footnotes

[1] In November 2024, the EU released the risk assessment reports submitted by very large online platforms, including Facebook and Instagram, as required by the EU Digital Services Act. However, the reports became available after the policy cut-off date and were therefore not taken into consideration as part of this assessment.