Cross-checking Facebook: Five Lies Revealed by Frances Haugen


London street art. Photo by Annie Spratt. Licensed for non-commercial reuse by Unsplash.

Written and compiled by Alex Rochefort, Zak Rogoff, and RDR staff.

The revelations of Facebook whistleblower Frances Haugen, disclosed in filings to the SEC and reported in the Wall Street Journal’s “Facebook Files” series, have brought forth irrefutable evidence that Facebook has repeatedly misled or lied to the public, and that it routinely breaks its own rules, especially outside the U.S.

Corroborating years of accusations and grievances from global civil society, the revelations raise the question: What do Facebook’s policies really tell us about how the platform works? The documents offer us a rare opportunity to cross-check the company’s public commitments against its actual practices, and against our own research findings of the past six years.

 

How does Facebook handle hate speech?

What Facebook says publicly:

In 2020, Facebook claimed a proactive detection rate of roughly 95% for hate speech: of the hate speech it removes, about 95% is flagged by its automated systems before any user reports it. The remaining 5% is flagged by users and removed after review by moderators.

What the Facebook files prove:

Facebook estimates internally that it takes action on “as little as 3-5% of hate speech” on the platform, because of limitations in its automated and human content moderation practices. In other words, the company does not remove 95% of the hate speech that violates its policies.

What we know:

The two figures are not technically contradictory: the 95% describes the share of removed hate speech that Facebook’s systems flagged before users did, while the 3-5% describes the share of all hate speech on the platform that the company acts on at all. But leading with the first number while withholding the second is emblematic of a longstanding strategy by Facebook to obfuscate and omit information in transparency reports and other public statements. The revelations reinforce what we’ve found in our research: while Facebook’s policies clearly outline what content is prohibited and how the company enforces its rules, Facebook does not publish data to corroborate this. Without that data, it is impossible for researchers to verify that the company does what it says it will do.

Our most recent Facebook company report card highlights the company’s failure to be fully transparent about its content moderation practices. Carrying out content moderation at scale is a complex challenge. But providing more transparency about those practices is not. See our 2020 data on transparency reporting.

 

How does Facebook handle policy enforcement when it comes to human rights violations around the world?

What Facebook says publicly: The company says it takes seriously its role as a communication service for the global community. In a 2020 Senate hearing, CEO Mark Zuckerberg noted that the company’s products “enabled more than 3 billion people around the world to share ideas, offer support, and discuss important issues” and reaffirmed a commitment to keeping users safe.

What the Facebook files prove: Facebook allocates 87% of its budget for combating misinformation to issues and users based in the U.S., even though these users make up only about 10% of the platform’s daily active users. These choices have exacerbated the spread of hate speech and misinformation in non-Western countries, undermined content moderation in regions marked by internal conflict and political instability, and contributed to offline harm and ethnic violence.

What we know: The Haugen revelations corroborate what civil society and human rights activists have been calling attention to for years—Facebook is insufficiently committed to protecting its non-Western users. Across the Global South, the company has been unable—or unwilling—to adequately assess human rights risks or take appropriate actions to protect users from harm. This is especially concerning in countries where Facebook has a de facto monopoly on communications services thanks to its zero-rating practices.

In our 2020 research, Facebook scored poorly on human rights due diligence and failed to show clear evidence that it conducts systematic impact assessments of its algorithmic systems, ad targeting practices, or processes for enforcing its Community Standards. The company often points to its extensive Community Standards as evidence that it takes seriously its responsibility to protect people from harm. But we now have proof that these standards are selectively enforced, in ways that reinforce existing structures of power, privilege, and oppression. See our 2020 data on human rights impact assessments for algorithmic systems and zero-rating.

 

How does Facebook handle policy enforcement for high-profile politicians and celebrities?

What Facebook says publicly: Facebook has wavered on the question of whether and how to treat speech from high-profile public figures, making exceptions to its typical content rules on the basis of “newsworthiness.” But in June 2021, the company said that it had reined in these exceptions at the recommendation of the Facebook Oversight Board. A blog post about the shift asserted: “we do not presume that any person’s speech is inherently newsworthy, including by politicians.”

What the Facebook files prove: Facebook maintains a special program, known as XCheck (or “cross-check”), that exempts high-profile users, such as politicians and celebrities, from the platform’s content rules. A confidential internal review of the program stated the following: “We are not actually doing what we say we do publicly….Unlike the rest of our community, these people can violate our standards without any consequences.”

What we know: We know that speech from high-profile people, especially heads of state, can have a significant impact on what people believe is true or false, and on what they feel comfortable saying online. Facebook maintains an increasingly detailed set of Community Standards describing what kinds of content are and are not allowed on its platform, but as our data over the years has shown, the company has long failed to provide evidence (such as transparency reports) that it actually enforces these rules. What are the human rights consequences of creating a two-tiered system like XCheck? Our governance data also shows that Facebook’s human rights due diligence processes hardly scratch the surface of this question.

 

Does Facebook prioritize growth over democracy and the public interest?

What Facebook says publicly: In a 2020 Facebook post, Mark Zuckerberg announced several policy changes meant to safeguard the platform against threats to the U.S. election, including a ban on political and issue ads, steps to keep misinformation from going viral, and “strengthened enforcement against militias, conspiracy networks like QAnon, and other groups that could be used to organize violence or civil unrest…”

What the Facebook files prove: These measures stayed in place through the election but were quickly rolled back afterward because they undermined “virality and growth on its platforms.” Other interventions that might have reduced the spread of violent or conspiracist content around the 2020 U.S. election were rejected by Facebook executives out of fear that they would reduce user engagement metrics. Haugen says the company routinely chooses platform growth over safety.

What we know: We know that Facebook’s systems for moderating both organic and ad content, as well as ad targeting, have a tremendous impact on what information people see in their feeds, and what they consequently believe is true. This means that Facebook plays a role in influencing people’s decisions about who to vote for. The company has failed to publish sufficient information about how it moderates these types of content. And while it has published some policies and statements on these processes, Haugen and others have proven that these statements are not always true. See our 2020 data on algorithmic transparency and rule enforcement related to advertising, ad targeting, and organic content.

 

Does Facebook knowingly profit from disinformation?

What Facebook says publicly: In a 2021 House hearing, Mark Zuckerberg deflected the suggestion from Congressman Bill Johnson, a Republican from Ohio, that Facebook has profited from the spread of disinformation.

What the Facebook files prove: Facebook profits from all of the content on its platform. Its algorithmically fueled, ad-driven business model depends on keeping users active on the platform so the company can make money from ads.

What we know: As we’ve said before, the company has never been sufficiently transparent about how it builds or uses algorithms.

Automated tools are essential to social media platforms’ content distribution and filtering systems. They are also integral to platforms’ surveillance-based business practices. Yet Facebook, like its competitors, publishes very little about how its algorithms and ad targeting systems are designed or governed; our 2020 research showed just how opaque this space really is. Unchecked algorithmic content moderation and ad targeting processes raise significant privacy, freedom of expression, and due process concerns. Without greater transparency around these systems, we cannot hold Facebook accountable to the public. See our 2020 data on human rights impact assessments for targeted advertising and algorithmic systems.

Facebook’s business model lies at the heart of the company’s many failures. Despite the range of harms it brings to people’s lives and rights, Facebook has continued its relentless pursuit of growth. Growth drives advertising, and ad sales account for 98% of the company’s revenue. Unless we address these structural dynamics — starting with comprehensive federal privacy legislation in the U.S. — we’ll be treating these symptoms forever, rather than eradicating the disease.
