Meta’s First Human Rights Report: The Good, the Bad, and the Missing

Meta released its first-ever “Annual” Human Rights Report last week, looking at the company’s purported progress toward meeting its human rights obligations from 2020 through 2021. The release follows years of criticism from civil society over Meta’s failure to address online abuses that have led to real-world harm across the globe. We at Ranking Digital Rights have consistently highlighted this pressing need, including in our latest Meta Scorecard. In fact, we urged the company then known as Facebook to act all the way back in 2015, when we published our first RDR Index, and we’ve continued to do so ever since.

This report, in other words, has been a long time coming. Although we’re glad it finally arrived, we would have liked to see a greater acknowledgement of the existing policies and incentive structures that are stifling the company’s ability to better respond to human rights issues. We’re hoping that the briefing Meta promised as a follow-up to the report will provide an opportunity to continue engaging on these issues. Below, we dig into the report and look at the good, the bad, and what’s missing.

First, the good:

  • The report exists: The fact that the heat on Meta was strong enough to compel the company to produce this report, which makes explicit references to international human rights norms and instruments, is a start. Better late than never!
  • Recognition that rights extend beyond users: Although it’s important that tech companies respect the human rights of their users, the impact of Facebook’s activities reaches far beyond them. Thankfully, this report recognizes that rights-holders include “not only users of our platforms and services, but the many others whose rights were potentially impacted by online activity and conduct.” 
  • Facebook’s Trusted Partners program: Meta discloses the use of “trusted partners,” a network of “over 400 non-governmental organizations, humanitarian agencies, human rights defenders and researchers from 113 countries around the globe.” The stated goal of this program is to help Facebook understand the impact of its policies on at-risk users. While there are good reasons to keep the full list of partner organizations confidential, the company should provide much more information on how the program actually works: as it stands, there is far too little transparency to properly evaluate what is, in theory, a positive development.

Now, the bad. And there’s unfortunately a good deal of that, with Meta making a whole lot of meaningless and misleading claims in this report:

  • Meta starts off the report with its mission statement: “[T]o give people the power to build community and bring the world closer together.” Somehow, according to Meta, this statement is supposed to align “strongly” with “human rights principles.” How exactly? We’re not so sure.
  • Next, Meta pays lip service to a “universal obligation to non-discrimination” as part of its “vision and strategy,” but without recognizing that the targeted advertising business model inherently enables and automates discrimination based on demographic and behavioral data. Nor does the report grapple with the discrimination resulting from the uneven way Meta allocates content moderation resources across languages.
  • Meta, in its own words, is a “mission-driven company where employees are typically aligned with human rights norms. In turn, this consensus leads to a company-wide community that wants to protect and advance human rights.” But there’s no evidence for this claim—we’re supposed to just take the company at its word. And, once again, Meta makes this statement despite relying on a business model that, as we’ve been saying for years, is grounded in the violation of the right to privacy.
  • Ad-policy enforcement barely makes it into the report: Although the company makes over 98 percent of its money from advertising, a discussion of the effects of Meta’s ad content and ad systems is almost completely absent from its “human rights impact assessments.” And this despite the fact that about 80 percent of Meta shareholders voted this year for a human rights impact assessment (HRIA) of the company’s ad-targeting practices. According to the report, Meta created new AI classifier systems, which it says will allow it “to enforce bans on violating ads and commerce listings for certain medical products.” This appears to be the only reference to ads in the entire report. (It should be noted that Meta releases no data whatsoever on how it moderates ads, despite accounting for almost a quarter of all digital ad spending in the United States.) Are we really supposed to believe that surveillance advertising has no impact on the rights to privacy, free expression, and non-discrimination? Meta clearly wants us to think so, but we’re not buying it.
  • Is this really all the human rights due diligence Meta did in two years? It’s not clear whether Meta has conducted human rights due diligence in countries beyond the ones mentioned (Cambodia, Indonesia, the Philippines, Sri Lanka, and India), or on product features other than end-to-end encryption and Ray-Ban Stories. If not, why not? If it has, why are these the only evaluations included in the report? In particular, as many other civil society organizations have pointed out, the full HRIA on India should be made public (allowing for redactions needed to protect civil society actors). We also expected to see a discussion of human rights due diligence around the so-called “metaverse,” but found none.
  • Meta’s Human Rights Policy Team, which was responsible for this report, counted just four full-time staff at the end of 2021. Even if many other roles also touch on human rights, a team of four seems far too small to properly oversee human rights policy at a company of Meta’s size and scope. (Contrast this number with the armies of lobbyists Meta employs around the world.)

Finally, there are a few things altogether missing that really should be there. 

  • There’s no mention whatsoever of the uneven enforcement of content policies across regions, countries, and languages. There is some mention of AI-driven content moderation, but no acknowledgment that these systems are much more advanced for some languages (like English) than for others, and don’t exist at all for many more. Meta also vaguely claims to have “improve[d] our moderation across languages by adding more expertise,” but says nothing about how this affects its ability to moderate effectively or how human rights are impacted.
  • Content moderators: There is no mention of the labor rights of Meta’s content moderators, even though the company has already been sued over moderators’ working conditions by an ex-moderator in Kenya.
  • There is no mention of any attempts at data minimization or purpose limitation—two bedrock principles of data protection that are fundamental to the human right to privacy. This is not surprising, given Meta’s voracious appetite for data collection and insistence that its very existence is “in line with human rights principles.”

Again, we’re glad that Meta felt compelled to put out this report and recognized the need to commit to a human rights policy, something we’ve long been calling for. Most large tech companies do not produce a human rights report at all. But beyond this, the report fails to actually address the causes of the online abuses that pushed civil society to demand action from Meta in the first place. Many of the issues we’ve highlighted in our past Scorecards, including insufficient attention to content moderation policies, are wholly missing. Furthermore, there isn’t much indication in this report that the company will do what’s needed to address its lack of adherence to human rights principles. But how could there be? The first step to solving a problem is admitting that there is one.
