Meta Platforms, Inc.
Headquartered in the United States, Meta offers some of the world’s most popular social networking and messaging services, including Facebook, Instagram, Messenger, and WhatsApp, which collectively had an estimated 3.35 billion daily active users worldwide in 2024. The vast majority of Meta’s revenue is derived from advertising.
In January 2025, Meta introduced drastic changes to its content moderation policies, promising “more speech and fewer mistakes.” As part of these changes, the company loosened its “hate speech” policy and renamed it the “hateful conduct” policy. Under the new policy, the company no longer removes “statements of inferiority, expressions of contempt or disgust; cursing; and calls for exclusion or segregation” or “the usage of slurs that are used to attack people on the basis of their protected characteristics.” The new policy further allows “sex- or gender-exclusive language,” for instance, when discussing access to bathrooms or eligibility to join law enforcement or the military. The company also announced an end to its independent fact-checking program, which will be replaced by Community Notes. This approach, which follows in the footsteps of X under Elon Musk, outsources fact-checking to users.
Meta further changed how it enforces its rules by limiting automated detection of policy violations to flagging only illegal activity and “high-severity violations, like terrorism, child sexual exploitation, drugs, fraud and scams.” Previously, it had deployed automated systems to proactively scan content for all violations of its rules. Both in the U.S. and globally, civil society groups, journalists, and experts have sounded the alarm over the impact of these changes on human rights and the free flow of factual information, as well as over the rise of hate and toxicity and the exclusion of marginalized communities and voices. Even prior to these changes, Meta had faced criticism from civil society for “systematically” silencing Palestinian voices in the context of the Israeli war on Gaza. The company also ran misleading advertisements on Facebook and Instagram that attacked the pro-Palestine movement, conflating support for Palestine with support for Hamas, a designated terrorist group in the U.S.
On the privacy front, the company was hit with probes, complaints, and fines over its data practices, including a USD 200 million fine in 2024 from Nigeria's Federal Competition and Consumer Protection Commission for “unauthorised transfer and sharing of Nigerian data.” Following complaints, it was forced to pause its use of user data to train AI models in both Brazil and the EU.
Meta’s performance stagnated in the 2025 RDR Index, with its overall score improving by just 1%. While it committed to protecting human rights, it provided users with only limited mechanisms to submit freedom of expression complaints and no mechanism for reporting privacy concerns. It disclosed a process for users to appeal content moderation actions on Facebook and Instagram, but not on Messenger and WhatsApp. Additionally, it disclosed limited information about how it assesses the privacy and discrimination risks associated with its policy enforcement process.[1] The company assessed some freedom of expression and discrimination risks associated with its use of algorithms to detect and moderate content in Israel and Palestine in May 2021. However, it was unclear whether any of these assessments were part of regular and systematic due diligence on the company’s policy enforcement processes and use of algorithmic systems.
The 2025 RDR Index: Big Tech Edition covers policies that were active on August 1, 2024. Policies that came into effect after August 1, 2024, were not evaluated for this benchmark.
Scores reflect the average score across the services we evaluated, with each service weighted equally.
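To make the equal weighting concrete, the short sketch below shows how an overall score could be computed under this methodology. It is a minimal illustration only: the function name and the score values are hypothetical, not actual RDR Index data or methodology code.

    # Minimal sketch of the equal-weighting rule described above:
    # a company's overall score is the unweighted mean of its
    # per-service scores. Values below are hypothetical placeholders.
    def overall_score(service_scores: dict[str, float]) -> float:
        """Average the per-service scores, weighting each service equally."""
        return sum(service_scores.values()) / len(service_scores)

    example = {"Facebook": 42.0, "Instagram": 36.0,
               "Messenger": 28.0, "WhatsApp": 30.0}
    print(f"Overall score: {overall_score(example):.1f}")  # Overall score: 34.0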
We rank companies on their governance, and on their policies and practices affecting freedom of expression and privacy.
Meta received the highest score in this category among all companies. It made an explicit, clearly articulated policy commitment to freedom of expression and information and to the right to privacy. However, it made no explicit commitment to human rights in its development and deployment of algorithmic systems, stating only that it is guided by human rights and recognizes their importance in this area (G1). It disclosed board-, executive-, and management-level oversight of how its practices affect human rights (G2). It also provided its employees with training on freedom of expression and privacy issues and maintained an employee whistleblower program through which employees can raise human rights concerns (G3). Further, the company provided relatively strong disclosures about its assessments of risks related to government regulations (G4a). However, it disclosed little information about human rights impact assessments related to its own policy enforcement and its development and use of algorithmic systems (G4b, G4d). It provided no information on whether it assesses risks associated with its targeted advertising policies and practices (G4c). Moreover, the civil rights audit the company conducted in 2020, which included an assessment of discrimination risks associated with its political advertising in the U.S., is now outdated. The company provided users with remedy mechanisms covering some freedom of expression complaints for Facebook, Instagram, and Messenger. However, it provided no remedy mechanisms for WhatsApp and disclosed no process for users to submit privacy complaints (G6a).
Meta ranked sixth in the freedom of expression category. It disclosed the types of content and activities it does not allow and its process for enforcing these rules (F3a). It was less transparent about its advertising content and ad targeting rules and how it enforces them (F3b, F3c). It published some data about the content and accounts it restricted to enforce its terms of service, but the data covered only Facebook and Instagram (F4a, F4b). It provided no data about advertisements it removed for violating its ad content and targeting rules (F4c). Further, the company lacked clarity about its handling of government censorship demands (F5a): while it was transparent about how it responds to such demands for Facebook and Instagram, it disclosed no such process for Messenger and WhatsApp. Its transparency report on government censorship demands did not specify the number of demands the company received and still failed to include data on Messenger and WhatsApp (F6).
Meta ranked fourth in the privacy category, ahead of its U.S. peers Apple, Microsoft, and Google. Although the company disclosed what user information it collects (P3a), it was not transparent about what user information it infers, disclosing only that it infers user interests on Facebook (P3b). It disclosed some of the information it shares with third parties (P4) and provided limited information about its data retention policies (P6) and the options users have to control their own information (P7). Further, it disclosed comprehensive information about its process for responding to government demands for user information (P10a) and published a transparency report offering some insight into the number of demands it received and its compliance rates (P11a). However, the company disclosed nothing about its processes for responding to private requests for user information and reported no data about such requests (P10b, P11b). The company had a security team in place that conducts security audits of its products and services, but it did not state whether it commissions third-party security audits (P13). Nor was the company transparent about how it responds to data breaches (P15).
[1] In November 2024, the EU released the risk assessment reports submitted by very large online platforms, including Facebook and Instagram, as required by the EU Digital Services Act. However, the reports became available after the policy cut-off date and were therefore not taken into consideration as part of this assessment.