Orange phone booth, Central African Republic. Photo by DFID–UK Department for International Development via CC 2.0

In November 2021, telecommunications company TIGO Tanzania was called to testify against the country’s main opposition leader, Freeman Mbowe, during a trial in which the government accused Mbowe of plotting acts of terror. Mbowe and his supporters, meanwhile, claimed that the trial was politically motivated. When brought to the stand and asked about TIGO’s approach to government demands for user information, the company representative admitted that “compliance with authorities is a higher priority to them than customers’ privacy.”

Wakesho Kililo is the Digital Rights Coordinator for the Greater Internet Freedom Consortium, where she focuses on the Africa region. She watched the testimony unfold in Tanzania and recognized an all-too-familiar attitude among telecommunications companies across the continent. In East Africa, telecommunications companies have been called out by civil society for their broad and often vague privacy policies, including recently in Uganda. Wakesho hoped that organizations in other parts of the continent could be persuaded to apply the same scrutiny.

Empowering Digital Rights Activists in Central and Southern Africa

It is in this context that RDR partnered with the Internet Freedom Project Lesotho as well as with Paradigm Initiative (PIN), under the auspices of the Greater Internet Freedom Consortium, to support breakthrough new research about the human rights risks posed by local technology companies, through studies carried out in Lesotho, Angola, the Democratic Republic of Congo (DRC), and the Central African Republic (CAR). Both organizations used a selection of indicators from RDR’s 2020 Corporate Accountability Index methodology to evaluate the policies and human rights commitments of digital services and telecommunications companies operating in those four countries.

Paradigm Initiative, a leading civil society organization protecting digital rights across the African continent, focused on three telecommunications companies, one for each country it covered: Unitel in Angola, and two subsidiaries of Orange operating in the DRC and CAR. PIN’s goal in completing its report was to help digital rights advocates and researchers in Central and Southern Africa understand existing gaps in company policies and identify the issues on which companies should be pressed to improve.

In Angola, Unitel is the largest mobile operator, with a market share of 80%. In the Democratic Republic of Congo, a laissez-faire approach to oversight of the telecommunications sector has led to increased market consolidation. After Orange acquired Tigo’s local subsidiary in 2016, Orange RDC increased its market share to almost 28%. As for the Central African Republic, infrastructure development by telecom operators has been low. Mobile internet penetration stands at only 27%, and the largest operator is Telecel, a subsidiary of Econet Wireless, followed by Orange.

In the Democratic Republic of Congo, telecommunications companies have come under particularly strong criticism for the poor quality of their services and their high prices. In July 2020, an Orange RDC user filed a complaint against the company for infringing on his right to freedom of expression and information because of the obstacles he faced in trying to use data packages he had already purchased and activated. Orange subsidiaries in both the DRC and CAR fail to adequately disclose information about the enforcement of their terms of service and publish no information about their network management practices. Operators across all three countries were found to have policies and terms of service that provide insufficient protection to users against digital rights violations.

Strengthening Lesotho’s Weak Digital Rights Culture

Lesotho is a small landlocked kingdom entirely surrounded by South Africa. As the authors of Internet Freedom Project Lesotho’s report point out, this means that its human rights culture is strongly influenced by that of its neighbor. In Lesotho’s case, a “weak human rights culture has resulted in a poor digital rights culture,” according to the report. To make its case, the organization evaluated four technology companies operating inside the country—two telecommunications providers and two financial companies—covering seven services in total. This marks the first time that RDR’s methodology has been used to study services like mobile wallets and online lending, and the first time it’s been used to evaluate financial companies.

People in Lesotho access the internet primarily through prepaid mobile services, as fixed-line broadband penetration stands at only 0.2%, far below the average of the countries evaluated by PIN. The telecom market is dominated by two main players, which are the focus of the study: Vodacom and Econet. Vodacom Lesotho is a subsidiary of Vodacom Group, based in South Africa (and owned by UK company Vodafone, one of the companies ranked in the RDR Index). Econet Lesotho is a subsidiary of Econet Wireless, operated by Econet Global, based in South Africa. The report also covered two financial companies: Standard Lesotho Bank, a subsidiary of Standard Bank Group, based in South Africa, and Express Credit, a tech company based in Lesotho.

The report found a stark gap between foreign-owned international companies and local ones when it comes to company-wide human rights commitments. Of the four companies evaluated, only those operated by international companies Vodafone Group and Standard Bank Group have human rights policies in place. Although six of the seven services studied have terms of service, these are mostly published in English, making them inaccessible to the majority of the population, which speaks Sesotho (the only exception is Vodacom).

The report’s findings on the right to privacy were no more encouraging. On the positive side, four out of the seven services do have privacy policies in place. Yet among these companies, many disclose far too little about both what data is collected and how it’s used. For example, none of the companies disclose how they respond to government demands for user information, including demands made through non-judicial procedures or received from foreign jurisdictions. Vodacom’s parent company, Vodafone, publishes transparency reports with information about its responses to government demands, including in Lesotho, but this information is not made available on the Vodacom Lesotho website.

Finally, none of the companies evaluated shared any disclosures about their processes for responding to government demands to restrict content or accounts, or about government demands to shut down a network or restrict access to a service. They also failed to commit to net neutrality principles, and provided no explanations about whether they engage in zero-rating practices. “The findings are reflective of the human rights situation in the country. The country has… an entrenched culture of administrative secrecy,” Nthabiseng Pule, the report’s author, explained.

Looking Ahead

The Internet Freedom Project Lesotho hopes that its work will push regulatory authorities, such as the Lesotho Communications Authority, to better enforce existing protections for users. PIN, for its part, hopes that its research will help raise awareness among civil society organizations; to this end, the study’s researchers prepared a toolkit to help organizations build advocacy campaigns around the findings. It is also clear from both studies that large multinational parent companies, such as Vodafone and Orange, need to be held accountable for ensuring that their subsidiaries make the same commitments on privacy and freedom of expression that they make for customers in Europe. Most importantly, these studies have highlighted the need for much stronger commitments from companies throughout Central and Southern Africa to respect human rights and avoid contributing to worsening censorship and political turmoil.

These reports join a growing collection of research projects that have adapted our methodology. You can browse all the other adaptations that have been published to date from across the world.

If you’re interested in carrying out your own research using our methods and standards, we want to hear from you! Write to us at partnerships@rankingdigitalrights.org.

In this conversation, RDR’s Global Partnerships Manager Leandro Ucciferri and Senior Editor Sophia Crabbe-Field speak with Jenni Olson, the Senior Director of Social Media Safety at GLAAD, the national LGBTQ media advocacy organization, which campaigns on behalf of LGBTQ people in film, television, and print journalism, as well as other forms of media. 

GLAAD recently expanded its advocacy to protect LGBTQ rights online by holding social media companies accountable through its Social Media Safety Program. As part of this work, since 2021, Jenni has helped lead the creation of a yearly Social Media Safety Index (SMSI), which reports on LGBTQ social media safety across five major platforms: Facebook, Instagram, Twitter, YouTube, and TikTok.

The 2022 version of the SMSI, released in July, included a Platform Scorecard developed by GLAAD in partnership with RDR, as well as Goodwin Simon Strategic Research. The Scorecard uses 12 LGBTQ-specific indicators to rate the performance of these five platforms on LGBTQ safety, privacy, and expression. (GLAAD also created a Research Guidance for future researchers interested in using their indicators.)

The main problems identified by the Scorecard and SMSI include inadequate content moderation and enforcement, harmful algorithms, and a lack of transparency, all of which disproportionately hurt LGBTQ people and other marginalized groups. The release of the 2022 SMSI in July brought widespread media attention, including in Forbes and Adweek. At the beginning of August, the project’s lead researcher Andrea Hackl wrote for RDR about her experience adapting our indicators to track companies’ commitments to policies protecting LGBTQ users.

The following conversation with Jenni touches upon the growth of politically motivated hate and disinformation, how GLAAD tailored RDR’s methodology to review the online experience of the LGBTQ community, the glaring lack of transparency from social media companies, and, following the report’s release, what’s next in the fight for LGBTQ rights online. They spoke on August 25, 2022.

 

Leandro Ucciferri: Thanks so much for being here with us today. We’re really happy to have you. To start off, we wanted to ask you: Why is tackling the role of social media platforms so important in the fight for LGBTQ rights? 

Jenni Olson: I think that clearly social media platforms are so dominant and so important in how we as a society are getting our information and are understanding or not understanding things. The problem of misinformation and disinformation⸺as I always say, another word for that is simply “lies”⸺is really a terrible problem and we find, in our work, that hate and disinformation online about LGBTQ people are really predominant. Obviously we also have things like COVID-related misinformation and monkeypox-related misinfo (which intersects very strongly with anti-LGBTQ hate and misinfo). There’s so much politically motivated misinformation and disinformation especially about LGBT folks, and especially about trans folks and trans youth. We as a community are being targeted and scapegoated. Right-wing media and right-wing politicians are perpetuating horrible hate and disinformation. These have escalated into calls for attacks on our rights, and even physical attacks.

Just in the last couple of months, the Patriot Front showed up at Idaho Pride, the Proud Boys showed up at Drag Queen Story Hour events at public libraries, there have been arson attacks on gay bars, and so on. So much of the hateful anti-LGBT rhetoric and so many of the narratives that led up to these attacks have been perpetuated on social media platforms.

It’s important to note that we also have these political figures and “media figures,” pundits, who are perpetuating hate and extremist ideas for political purposes, but also for profit. One of the things we see is this grift of, “Oh, here’s all of this hate and these conspiracy theories” that then ends with, “Hey, come subscribe to the channel to get more of this.” These figures are making money, but the even more horrible thing is that the platforms themselves are also profiting from this hate. So the platforms have an inherent conflict of interest when it comes to our requests for them to mitigate that content. Which is frustrating to say the least.

Sophia Crabbe-Field: How much have you seen these trends and this rhetoric amplified in recent years? And was that a motivation for launching the Social Media Safety Index and, more recently, the scorecard?

JO: Things are so bad and getting worse. Right-wing media and pundits just continue to find new and different ways of spreading hateful messages. We did the first Social Media Safety Index last year, in 2021. The idea was to establish a baseline and to look at five platforms to determine what the current state of affairs is and say, “Okay, here’s some guidance, here are some recommendations.” Our other colleagues working in this field make many of the same basic recommendations, even if it’s for different identity-based groups, or different historically marginalized groups; we’re all being targeted in corollary ways. So, for instance, our guidance emphasizes things like fixing algorithms to stop promoting hate and disinfo, improving content moderation (including moderation across all languages and regions), stepping up with real transparency about moderation practices, working with researchers, respecting data privacy—these are things that can make social media platforms safer and better for all of us.

So we did this general guidance and these recommendations, and then met with the platforms. We do have regular ongoing meetings throughout the year on kind of a rapid-response basis; we alert them to things as they come up. They’re all different but some of them ask for our guidance or input on different features or different functionalities of their products. So we thought that for the second edition of the report in 2022, we’ll do scorecards and see how the companies did at implementing our 2021 guidance and, at the same time, we’ll have a more rigorous numeric rating of a set of elements.


LU: You came up with your own very specific set of indicators, but inspired by our methodology. Why did you choose to base your approach on RDR’s methodology and how would you explain the thought process behind the adaptation?

JO: We had been thinking about doing a scorecard and trying to decide how to go about that. We knew that we wanted to lean on someone with greater expertise. We looked to Ranking Digital Rights as an organization that is so well respected in the field. We wanted to do things in a rigorous way. We connected with RDR and you guys were so generous and amenable about partnering. RDR then connected us with Goodwin Simon Strategic Research, with Andrea Hackl (a former research analyst with RDR) as the lead research analyst for the project. That was such an amazing process and, yes, a lot of work. With Andrea, we went about developing the 12 unique LGBT-specific indicators and then Andrea attended some meetings with leaders at the intersection of LGBT, tech, and platform accountability and honed those indicators a little more and then dug into the research. For our purposes, the scorecard seemed like a really powerful way to illustrate the issues with the platforms and have them measured in a quantifiable way.
Though it’s called the “Social Media Safety Index,” we’re looking not only at safety, but also at privacy and freedom of expression. We developed our indicators by looking at a couple of buckets. The first being hate and harassment policies: Are LGBTQ users protected from hate and harassment? The second area was around privacy, including data privacy. What user controls around data privacy are in place? How are we being targeted with advertising or algorithms? Then the last bucket would be self-expression in terms of how we are, at times, disproportionately censored. Finally, there is also an indicator around user pronouns: Is there a unique pronoun field? Due to lack of transparency, we can’t objectively measure enforcement.


A clip from the 2022 GLAAD Social Media Safety Index Scorecard. 

What we end up hearing the most about is hate speech, but it’s important to note that LGBTQ people are also disproportionately impacted by censorship. We’re not telling the platforms to take everything down. We’re simply asking them to enforce the rules they already have in place to protect LGBTQ people from hate. 

One thing about self-expression: There’s a case right now at the Oversight Board related to Instagram that we just submitted our public comment on. A trans and non-binary couple had posted photos that were taken down “in error” or “unjustly”⸺they shouldn’t have been taken down. That’s an example of disproportionate censorship. And that relates back to another one of the indicators: training of content moderators. Are content moderators trained in understanding LGBTQ issues? And the corresponding recommendations we’ve made are that they should make sure to train their content moderators on LGBTQ-specific issues, they should have an LGBTQ policy lead, and so on. 

SC-F:  When you were putting together the indicators, were you thinking about the experience of the individual user or were you also thinking more so or equally about addressing broader trends that social media platforms perpetuate, including general anti-LGBTQ violence?

JO: I would say it’s a combination of both. The end goal is our vision of social media as a safe and positive experience for everyone, especially those from historically marginalized groups. I think it’s so important to state this vision, to say that it could be like this, it should be like this.

One of the things that we really leaned in on is the policy that Twitter established in 2018 recognizing that targeted misgendering and deadnaming of trans and non-binary people is a form of anti-LGBT and anti-trans hate. It is a really significant enforcement mechanism for them. And so in last year’s report we said everyone should follow the lead of Twitter and add an explicit protection as part of their hate speech policies. We met with TikTok following last year’s SMSI release and, in February of 2022, they added that to their policy as well. This was really great to see and significant when it comes down to enforcement. We continue to press Meta and YouTube on this. Just to be clear, targeted misgendering and deadnaming isn’t when someone accidentally uses someone’s incorrect pronouns. This is about targeted misgendering and deadnaming, which is not only very vicious, but also very popular.

SC-F: Did your research have any focus on policies that affect public-facing figures, in addition to policies geared toward the general user?

JO: In fact, over the last approximately 6-9 months, targeted misgendering and deadnaming has become pretty much the most popular form of anti-trans hate on social media and many of the most prominent anti-LGBTQ figures are really leaning into it.

Some particularly striking examples of its targets include Admiral Rachel Levine, a transgender woman who’s the United States Assistant Secretary for Health. There’s also Lia Thomas, the NCAA swimmer, and Amy Schneider, who was recently a champion on Jeopardy. Admiral Levine has been on the receiving end of attacks from Texas Attorney General Ken Paxton, as well as from Congresswoman Marjorie Taylor Greene. Most recently, Jordan Peterson, the right-wing extremist, attacked Elliot Page on Twitter, again using malicious targeted misgendering and deadnaming.

Platforms all have different policies related to public figures. In the Jordan Peterson example, Twitter took that down because it’s considered violative of their policy. But the YouTube video where he misgenders and deadnames Elliot Page like 30 times has more than 3 million views. And he’s expressing intense animosity toward trans people. YouTube did at least demonetize that video, as well as another Peterson video which expressed similar anti-trans views and outrageously described gender-affirming care (which is recognized as the standard of care by every major medical association in the U.S.) as “Nazi medical experiment-level wrong.” But they did not take the videos down despite their stated policy of removing content that attacks people based on their gender identity. So we continue to fight on these fronts that are about protecting individual people and the community in general.


LU:  Were there any unexpected or surprising findings to come out of this year’s research? Especially when we look at the differences in policies between various companies in addressing some of the above-mentioned issues?

JO: We were already pretty familiar with what the different policies are. The issue centers around enforcement and around the company culture or attitude toward enforcement. I think that the most surprising thing to me was actually that all the companies got such low ratings. I expected them not to do well. We know from anecdotal experience that they are failing us on many fronts, but I was really actually surprised at how low their ratings were, that none of them got above a 50 on a scale of 100.

Next year we’ll have new, additional indicators. I think there are some things that aren’t totally captured by the scorecard, but I’m not sure how to capture them. For instance, here’s an anecdotal observation that’s interesting to note about the difference between TikTok and the other platforms. (By the way, sometimes I’ll say a nice thing about one of the platforms and it’s not like I’m saying they’re so great or they’re better than everyone else, there are some things that some are better at, but they all failed and they all need to do better.) If you look at TikTok and you look at Jordan Peterson’s channel on TikTok it’s like he’s a different person (than on YouTube, for example) because TikTok just has a very low threshold for that kind of garbage. It’s clear that TikTok has said, “No you can’t say that stuff.” They’re monitoring the channel in such a way where it’s just not allowed. And, again anecdotally, it feels like Meta and YouTube have a much higher threshold for taking something down. There are nuances to these things. As LGBTQ people, we also don’t want platforms over-policing us.

LU: How hopeful are you that companies will do what needs to be done to create a safer environment given all the incentives they have not to? 

JO: We do this work as a civil society organization and we’re basically saying to these companies: “Hey, you guys, would you please voluntarily do better?” But we don’t have power so the other thing that we’re doing is saying there needs to be some kind of regulatory solution. But, ultimately, there needs to be accountability to make these companies create safer products. Dr. Joan Donovan at the Shorenstein Center has a great piece that I often think about that talks about regulation of the tobacco industry in the 1970s and compares it with the tech industry today, looking at these parallels with how other industries are regulated and how there are consequences if you have an unsafe product. If the Environmental Protection Agency says, “You dumped toxic waste into the river from your industry, you have to pay a $1 billion fine for that,” well then the industry will say, “Okay, we’ll figure out a solution to that, we won’t do that because it’s going to cost us a billion dollars.” The industry is forced to absorb those costs. But, currently, social media and tech in general is woefully under-regulated and we, as a society, are absorbing those costs. Yes, these are very difficult things to solve but the burden has to be on the platforms, on the industry, to solve them. They’re multi-billion dollar industries, yet we’re the ones absorbing those costs.

It’s not like the government is going to tell them: “You can say this and you can’t say that.” That’s not what we’re saying. We’re saying that there has to be some kind of oversight. It’s been interesting to see the DSA [the European Digital Services Act] and to see things happening all over the world that are going to continue to create an absolute nightmare for the companies and they are being forced to deal with that.


SC-F: You’ve already touched on the difficulty of measuring enforcement because of the lack of transparency, but from what you’re able to tell, at least anecdotally, do you see a wide schism between the commitments that are made and what plays out in the real-life practices of these companies? 

JO: I have two things to say about that. Yes, we have incredible frustration with inadequate enforcement, including things that are just totally blatant, like the example I just mentioned of YouTube with the Jordan Peterson videos. It was an interesting thing to feel like, on the one hand, it’s a huge achievement that YouTube demonetized those two videos, which means they are saying this content is violative. But it’s extremely frustrating that they will not actually remove the videos. In YouTube’s official statement in response to the release of the SMSI they told NBC News: “It’s against our policies to promote violence or hatred against members of the LGBTQ+ community.” Which quite frankly just feels totally dishonest of them to assert when they are allowing such hateful anti-LGBTQ content to remain active. They say they’re protecting us, but this is blatant evidence that they are not. As a community we need to stand up against the kind of hate being perpetuated by people like Jordan Peterson, but even more so we should be absolutely furious with companies like YouTube that facilitate that hate and derive enormous profits from it.


Source: GLAAD’s 2022 Social Media Safety Index. 

In addition to our advocacy efforts pressing the platforms on their policies, there are many other modes of activism we’re trying to be supportive of. The folks at Muslim Advocates have launched a lawsuit against Meta based on the concept of Facebook engaging in false advertising: Meta says that their platforms are safe, that their products are safe, but they’re not. I think it’s being referred to as an “experimental lawsuit.” It’s exciting to me that there are different kinds of approaches to this issue being tried out. Another approach is shareholder advocacy; there’s exciting stuff there. There are also things like No Hate at Amazon, an employee group opposing all forms of hate within the company, which actually did a die-in a couple of months ago over the company’s sale of anti-trans books.

SC-F: Do you feel at all hopeful that the scorecard might lead to more transparency that would, eventually, allow for better monitoring of enforcement?

JO: I’m not naïve enough to believe that the companies are just going to read our recommendations and say “Oh wow, thank you, we had no idea, we’ll get right on that, problem solved, we’re all going home.” This kind of work is what GLAAD has done since 1985: create public awareness and public pressure and maintain this public awareness and call attention to how these companies need to do better. There are times when it feels so bad and feels so despairing like, “Oh, we had this little tiny victory but everything else feels like such a disaster.” But then I remind myself: This is why this work is so important. We do have small achievements and we have to imagine what it would be like, how much worse things would be, if we weren’t doing the work. I’m not naïve that this is going to create solutions in simple ways. It is a multifaceted strategy and, as I mentioned a minute ago, it is also really important that we’re working in coalition with so many other civil society groups, including with Ranking Digital Rights. It’s about creating visibility, creating accountability, and creating tools and data out of this that other organizations and entities can use. A lot of people have said, “We’re using your report, it’s valuable to our work.” In the same way that last year we pointed to RDR’s report in our 2021 SMSI report.


LU: That’s great. You had this huge impact already with TikTok this past year. In our experience as well, change is slow. But you’re moving the machinery within these companies. Tied to that, and to the impact you’re making with the SMSI, I think our last question for today would be: How do you envision the SMSI being used? Because, on one hand, GLAAD is doing their own advocacy, their own campaigning, but at the same time you’re putting this out there for others to use. Do you hope that the LGBTQ community and end users will use this data on their own? Do you expect that more community leaders and advocates in the field will use this information for their own advocacy? How do you see that playing out in the coming months and complementing the work GLAAD is doing? 

JO: Thanks for saying that we’ve done good work. 

One of the amazing things about the report is that it came out about three, four weeks ago, and we got incredible press coverage for it. And that’s just so much of a component: public awareness and people understanding that this is all such an enormous problem. But also building public awareness in the LGBTQ community. Individual activists and other colleagues being able to cite our work is very important: cite our numbers, cite the actual recommendations, cite the achievements.

It does feel really important that GLAAD took leadership on this. I was brought on two years ago as a consultant on the first Social Media Safety Index and we saw that this work was not being done. That was the first ever “Okay, let’s look at this situation and establish this baseline and state that this is obviously a problem, and here’s the big report.”

But at the same time, I just have a lot of humility because there are so many people doing this work. There are so many individual activists that inspire me, it’s so moving. Do you know who Keffals is? She’s a trans woman, a Twitch streamer in Canada. She does such amazing activism, such amazing work — especially standing up as a trans activist online and on social media, and just this past weekend she was swatted. Swatting is when someone maliciously calls the police and basically reports a false situation to trigger a SWAT team being sent to a person’s house. So they showed up at her house and busted down her door and put a shotgun in her face, terrifying her and taking her stuff and taking her to jail; it’s just horrifying. And, like doxing, it’s another common and terrible form of anti-LGBTQ hate unique to the online world which manifests in real-world harms. She just started talking about this yesterday and it’s all over the media. She’s putting herself on the line in this way and being so viciously attacked by anti-trans extremists. Anyway, it’s so powerful for people like her to be out there, courageously being who they are, and they deserve to be safe. I’m grateful that we get to do this work as an organization and that it’s useful to others. And I’m just humbled by the work of so many activists all over the world.

If you’re a researcher or advocate interested in learning more about our methodology, our team would love to talk to you! Write to us at partnerships@rankingdigitalrights.org.

In its response to our letter campaign with Access Now, Meta takes issue with aspects of its score in RDR’s 2022 Big Tech Scorecard. Here’s why we stand by our results.

Ranking Digital Rights wishes to address Meta’s response to the letter campaign led by Access Now, in coordination with RDR. As part of this campaign, Meta, along with all the companies we ranked in our 2022 Big Tech Scorecard, was asked to make one improvement to its human rights performance. This year, Access Now called on Meta to be more transparent about government censorship demands, particularly those targeting WhatsApp and Facebook Messenger. While several companies issued responses, Meta’s was unique in raising questions about RDR’s standards and findings.

Meta’s response made a number of claims that we have decided to address directly below.

  1. Meta’s claim: RDR’s standards are unattainable.

    What our data says: Meta notes that “it’s important that there be ambitious goals…but also that at least some of these be attainable.” All of the goals set forth in RDR’s indicators are attainable; they simply require that corporate leadership dedicate the time and willpower to fulfilling them. For example, when the inaugural RDR Index was released in 2015, none of the ranked companies disclosed any data on the content and accounts they restricted for breaching their own rules. As of our latest Scorecard, companies that do not disclose this information are quickly becoming outliers. Similarly, even companies that already score well can make considerable progress from year to year.

  2. Meta’s claim: The Big Tech Scorecard doesn’t give points for publishing the results of human rights due diligence processes.

    What our data says: Meta claims that the Scorecard does not consider “criteria related to communicating insights and actions from human rights due diligence to rights holders.” It is true that our human rights impact assessment (HRIA) indicators focus on procedural transparency rather than simply the publication of results. We do recognize that Meta has coordinated with reputable third parties such as BSR and Article One Advisors to publish several abbreviated country-level assessments as well as to guide its work on expanding encryption. However, it has yet to demonstrate the same degree of transparency on issues that are fundamental to how it operates, including targeted advertising and algorithms. In addition, its country-level assessments have notable gaps. Human rights groups have raised serious questions about the lack of information Meta shared from its India HRIA in its inaugural human rights report. This HRIA was meant to evaluate the company’s role in spreading hate speech and incitement to violence in that country. Societies where Meta has a powerful and rapidly growing presence deserve more than a cursory view of the company’s impact, especially when Meta is being directly linked to such explicit human rights harms.

  3. Meta’s claim: RDR should have given Meta a higher score for its purported commitment to human rights standards in the development of AI.

    What our data says: Meta points to its Corporate Human Rights Policy, arguing that it “clearly specifies how human rights principles guide Meta’s artificial intelligence (AI) research and development” and questioning why our Scorecard “indicate[s] [Meta] do[es] not commit to human rights standards in AI development.” The problem is: Meta’s human rights “commitment” on AI falls short of actually committing. Our findings acknowledge an implied commitment to these standards (which equates to partial credit). For example, their policy states that human rights “guide [Meta’s] work” in developing AI-powered products and that Meta “recognize[s] the importance of” the OECD Principles on Artificial Intelligence. We encourage Meta to make its commitment to human rights in the development and use of AI an explicit one.

  4. Meta’s claim: RDR unfairly expects “private messaging” services to meet the same transparency standards as other services.

    What our data says: By inquiring about the factors RDR considers when “requir[ing] private messaging services, including encrypted platforms, to conform to the same transparency criteria as social media platforms,” Meta seems to be implying that we do not understand how their products work or that our indicators are not fit for purpose with respect to so-called “private messaging” services like Messenger and WhatsApp.

    To start with, Facebook Messenger, the more popular of the two apps in the U.S., is not even an encrypted communications channel (at least not yet). Meanwhile, many users are not fully aware of how “private” (or not) a messaging service is when they sign up for it. There is abundant evidence that Meta monitors Messenger conversations, ostensibly for violative content, but the precise mix of human and automated review involved remains a mystery. As efforts to strip people of their reproductive rights continue to grow, Meta has a responsibility to shine a light on government demands for users’ messages and information. Law enforcement agencies in U.S. states where abortion is now illegal have successfully obtained Messenger chats that eventually led to criminal charges. Finally, even for encrypted platforms like WhatsApp, our standards call for companies to be as transparent as possible regarding automated filtering, account restrictions, and other enforcement actions. Transparency on such basic protocols shouldn’t be too big of an ask.

Meta also notes its plan to build out its disclosures on government demands for content restrictions. This is an encouraging sign. In particular, Meta announced that it plans to publish data on content that governments have flagged as violating the company’s Community Standards—a tactic governments often use to strong-arm companies into compliance without due process. It also committed to start notifying users when content is taken down for allegedly violating a law. Our indicators have long called for companies to enact these two measures. Still, much work remains, not all of which is reflected in Meta’s plans.

The issues Meta has raised about our standards pertain, in this case, to transparency on government censorship demands. This means that our most fundamental concern about Meta’s human rights record remains unaddressed: The company’s business model still relies almost entirely on targeted advertising. Meta does not report on the global human rights impacts of its targeting systems and publishes no data on how it enforces its advertising policies. These omissions are unjustifiable. There is widespread agreement that a business model powered by mountains of user data generates harmful incentives and ultimately leads to human rights harms. Even Meta’s shareholders are vigorously supporting calls to assess these harms, only to be stymied by Mark Zuckerberg’s inflated decision-making power.

Without addressing the problems that lie at the root of many of its human rights impacts or recognizing the need for systemic change, Meta will continue to “nibble around the edges,” as shareholders have argued in recent calls to action. Along with our allies, RDR will continue to push Meta and other Big Tech companies to achieve the standards needed to uphold human rights. We do so with the knowledge that what we are asking for from companies is not only fully achievable, but also very much essential. Meta can do better; they just have to commit to try.


Investor advocacy and shareholder action on human rights topics have reached unprecedented levels this year. Five of the largest global tech companies—Alphabet (Google), Amazon, Apple, Meta, and Twitter—all faced a record number of shareholder resolutions. The rise of investing based on environmental, social, and governance (ESG) factors has been a key driver of this trend. Civil society groups like RDR are providing the human rights standards and stories that allow investors to evaluate their holdings’ commitment to a better tomorrow.

Today we are proud to meet the evolving needs of investors with an update to our Investor Guidance page. Working with the investor community is in RDR’s DNA. We developed our very first Corporate Accountability Index in 2015 in partnership with leading ESG research provider Sustainalytics. Our work since then has been suffused with investor partnerships aimed at better protecting human rights. Today’s update illuminates how our work with shareholders has evolved in light of the surge of ESG-driven investor engagement and what digital rights topics have emerged as key investor priorities.

  • First, we are publishing more details about the impact of our work with investors. This includes joint undertakings with individual asset managers, but also sweeping projects like the Digital Rights Engagement Initiative, coordinated by the Investor Alliance for Human Rights. The initiative consists of coordinated outreach to individual companies by the 177 signatories of the Investor Statement on Corporate Accountability for Digital Rights, which calls on companies to report on their progress on digital rights and is based on RDR’s standards.
  • Second, we are updating and enriching our shareholder resolutions data with information about the outcomes of each resolution, including the result of the final vote. We are also bringing together stories about the direct and indirect impact of these votes: news reports, company announcements, and new campaigns inspired by each resolution. With this update, we are also marking resolutions that cite RDR and those whose development we supported directly.
  • Finally, we are creating a separate “Spotlight” space highlighting insights from members of the RDR team on topics we consider critical to both shareholders’ rights and to our human rights-based mission. Our inaugural Spotlight is our mini-report on bringing down barriers to shareholder advocacy.

Delving into the new data we are publishing today highlights noteworthy trends in investor behavior. Shareholders are revealing an increasingly nuanced understanding of the human rights impact of companies’ existing and emerging operations. Alphabet (Google), for instance, faced a petition this year calling on the tech giant to assess the impacts of its plans to build data centers in human rights hotspots such as Saudi Arabia. At Amazon, resolutions calling out the human rights violations enabled by its facial recognition and surveillance products continued to gain traction, winning a robust 40% of shareholder votes. Calls at both Meta and Google to terminate their multi-class share structures, which allow powerful executives to artificially dilute majority support for such resolutions, won near-unanimous support from independent shareholders, setting an all-time record.

Meanwhile, RDR’s involvement in shareholder resolutions has also evolved in the past two years: from providing data points to directly shaping them alongside activist investors. Our Scorecards provide a balanced assessment of more than two dozen companies. Where we see a company’s disclosures on a key topic persistently lagging behind, we help forge collective efforts to push them to improve. This year, we helped craft a proposal that called on Meta to assess the human rights impacts of its targeted ad-based business model, which won the support of over 70% of independent shareholders. We also worked with shareholders to call for an assessment of Google’s FLoC ad technology, which likely influenced the company’s decision to terminate the program.

Improving the behavior of powerful actors requires persistent effort over time. This understanding is baked into our research and rankings, which track the yearly ebb and flow of companies’ disclosures about how they protect users’ rights. It is baked into our policy engagement, which provides guidance for lawmakers to shape new legislation. It is baked into our work with advocacy partners around the world who adapt our methodology to create new windows of scrutiny. And it is baked into our collaboration with investors, who represent an increasingly powerful source of pressure on companies to act responsibly. We strive to connect these streams whenever possible.

Civil society watchdogs, the responsible investor community, and those working to reform companies from within share a common goal: strengthening corporate accountability and protecting human rights. Today’s update is one more step toward bringing these communities together and showing how their common goal can be achieved.

Ranking Digital Rights has once again partnered with Global Voices Translation Services to translate the executive summary of the 2022 Big Tech Scorecard into six major languages: Arabic, Chinese, French, Korean, Russian, and Spanish!

The RDR Big Tech Scorecard evaluates 14 companies, whose products and services are used by over four billion people worldwide, in all kinds of cultures and contexts. The languages of our translations represent the most commonly spoken languages in the countries where the companies we rank are located, and therefore reflect the global nature of our work.

Key components of our 2020 methodology, including our 2020 revisions, are also available in Spanish, French, and Arabic. These resources provide a practical tool for anyone around the world who wishes to use and adapt our standards and build advocacy campaigns adapted to their goals and local contexts.

For example, over the past year, civil society organizations in West and Southern Africa, as well as South and Southeast Asia, have adapted our methodology to study local tech sectors in a total of 10 countries. Projects carried out by Paradigm Initiative in Angola, the DRC, and the Central African Republic, and by EngageMedia in six South and Southeast Asian countries, used our standards to evaluate local telecommunications companies. Meanwhile, the Internet Freedom Project Lesotho evaluated financial services, in addition to telcos. Making our resources available in multiple languages is therefore a key part of our strategy to expand the reach and impact of our rankings and standards.

With these translations, we hope to support broader advocacy actions that can leverage this data and analysis to hold more companies accountable for policies that better respect people’s human rights online.

Translations for the Telco Giants 2022 Scorecard—forthcoming this fall—will cover 12 companies based in 12 different jurisdictions: Spain, the UK, the U.S., Norway, Germany, France, South Africa, Mexico, Malaysia, India, UAE, and Qatar. You can also visit our translations page for translations from previous years.

It takes a village: We thank Global Voices for their work on the translations, as well as our regional partners for their help in reviewing and promoting these materials!

Get in touch: If you’re a researcher or advocate interested in learning more about our methodology, our team would love to talk to you! Write to us at info@rankingdigitalrights.org.