Meta Can Do Better, If They Try


In its response to our letter campaign with Access Now, Meta takes issue with aspects of its score in RDR’s 2022 Big Tech Scorecard. Here’s why we stand by our results.

Ranking Digital Rights wishes to address Meta’s response to the letter campaign led by Access Now, in coordination with RDR. As part of this campaign, Meta, along with all the companies we ranked in our 2022 Big Tech Scorecard, was asked to make one improvement on their human rights performance. This year, Access Now called on Meta to be more transparent about government censorship demands, particularly those targeting WhatsApp and Facebook Messenger. While several companies issued responses, Meta’s was unique in raising questions about RDR’s standards and findings.

Meta’s response made a number of claims that we have decided to address directly below.

  1. Meta’s claim: RDR’s standards are unattainable.

    What our data says: Meta notes that “it’s important that there be ambitious goals…but also that at least some of these be attainable.” Yet all of the goals set forth in RDR’s indicators are attainable. What they require is that corporate leadership dedicate time and willpower to fulfilling them. For example, when the inaugural RDR Index was released in 2015, none of the ranked companies disclosed any data on what content and accounts they restricted for breaching their own rules. As of our latest Scorecard, companies that do not disclose this information are quickly becoming outliers. Similarly, even companies that already score well can make considerable progress from year to year.

  2. Meta’s claim: The Big Tech Scorecard doesn’t give points for publishing the results of human rights due diligence processes.

    What our data says: Meta claims that the Scorecard does not consider “criteria related to communicating insights and actions from human rights due diligence to rights holders.” It is true that our human rights impact assessment (HRIA) indicators focus on procedural transparency rather than simply the publication of results. We do recognize that Meta has coordinated with reputable third parties such as BSR and Article One Advisors to publish several abbreviated country-level assessments as well as to guide its work on expanding encryption. However, it has yet to demonstrate the same degree of transparency on issues that are fundamental to how it operates, including targeted advertising and algorithms. In addition, its country-level assessments have notable gaps. Human rights groups have raised serious questions about the lack of information Meta shared from its India HRIA in its inaugural human rights report. This HRIA was meant to evaluate the company’s role in spreading hate speech and incitement to violence in that country. Societies where Meta has a powerful and rapidly growing presence deserve more than a cursory view of the company’s impact, especially when Meta is being directly linked to such explicit human rights harms.

  3. Meta’s claim: RDR should have given Meta a higher score for its purported commitment to human rights standards in the development of AI.

    What our data says: Meta points to its Corporate Human Rights Policy, arguing that it “clearly specifies how human rights principles guide Meta’s artificial intelligence (AI) research and development” and questioning why our Scorecard “indicate[s] [Meta] do[es] not commit to human rights standards in AI development.” The problem is that Meta’s human rights “commitment” on AI falls short of actually committing. Our findings acknowledge an implied commitment to these standards (which equates to partial credit). For example, its policy states that human rights “guide [Meta’s] work” in developing AI-powered products and that Meta “recognize[s] the importance of” the OECD Principles on Artificial Intelligence. We encourage Meta to make its commitment to human rights in the development and use of AI an explicit one.

  4. Meta’s claim: RDR unfairly expects “private messaging” services to meet the same transparency standards as other services.

    What our data says: By inquiring about the factors RDR considers when “requir[ing] private messaging services, including encrypted platforms, to conform to the same transparency criteria as social media platforms,” Meta seems to be implying that we do not understand how their products work or that our indicators are not fit for purpose with respect to so-called “private messaging” services like Messenger and WhatsApp.

    To start with, Facebook Messenger, the more popular of the two apps in the U.S., is not even an encrypted communications channel (at least not yet). Meanwhile, many users are not fully aware of how “private” (or not) a messaging service is when they sign up for it. There is abundant evidence that Meta monitors Messenger conversations, ostensibly for violative content, but the precise mix of human and automated review involved remains a mystery. As efforts to strip people of their reproductive rights continue to grow, Meta has a responsibility to shine a light on government demands for users’ messages and information. Law enforcement agencies in U.S. states where abortion is now illegal have successfully obtained Messenger chats that eventually led to criminal charges. Finally, even for encrypted platforms like WhatsApp, our standards call for companies to be as transparent as possible regarding automated filtering, account restrictions, and other enforcement actions. Transparency on such basic protocols shouldn’t be too big of an ask.

Meta also notes its plan to build out its disclosures on government demands for content restrictions. This is an encouraging sign. In particular, Meta announced that it plans to publish data on content that governments have flagged as violating the company’s Community Standards—a tactic governments often use to strong-arm companies into compliance without due process. It also committed to start notifying users when content is taken down for allegedly violating a law. Our indicators have long called for companies to enact these two measures. Still, much work remains, not all of which is reflected in Meta’s plans.

The issues Meta has raised about our standards pertain, in this case, to transparency on government censorship demands. This means that our most fundamental concern about Meta’s human rights record remains unaddressed: The company’s business model still relies almost entirely on targeted advertising. Meta does not report on the global human rights impacts of its targeting systems and publishes no data on how it enforces its advertising policies. These omissions are unjustifiable. There is widespread agreement that a business model powered by mountains of user data generates harmful incentives and ultimately leads to human rights harms. Even Meta’s shareholders are vigorously supporting calls to assess these harms, only to be stymied by Mark Zuckerberg’s inflated decision-making power.

Without addressing the problems that lie at the root of many of its human rights impacts or recognizing the need for systemic change, Meta will continue to “nibble around the edges,” as shareholders have argued in recent calls to action. Along with our allies, RDR will continue to push Meta and other Big Tech companies to achieve the standards needed to uphold human rights. We do so with the knowledge that what we are asking for from companies is not only fully achievable, but also very much essential. Meta can do better; they just have to commit to try.

