In this conversation, RDR’s Global Partnerships Manager Leandro Ucciferri and Senior Editor Sophia Crabbe-Field speak with Jenni Olson, the Senior Director of Social Media Safety at GLAAD, the national LGBTQ media advocacy organization, which campaigns on behalf of LGBTQ people in film, television, and print journalism, as well as other forms of media. 

GLAAD recently expanded its advocacy to protect LGBTQ rights online by holding social media companies accountable through its Social Media Safety Program. As part of this work, Jenni has helped lead the creation of a yearly Social Media Safety Index (SMSI), published since 2021, which reports on LGBTQ social media safety across five major platforms (Facebook, Instagram, Twitter, YouTube, and TikTok). 

The 2022 version of the SMSI, released in July, included a Platform Scorecard developed by GLAAD in partnership with RDR, as well as Goodwin Simon Strategic Research. The Scorecard uses 12 LGBTQ-specific indicators to rate the performance of these five platforms on LGBTQ safety, privacy, and expression. (GLAAD also created a Research Guidance for future researchers interested in using their indicators.)

The main problems identified by the Scorecard and SMSI include inadequate content moderation and enforcement, harmful algorithms, and a lack of transparency, all of which disproportionately hurt LGBTQ people and other marginalized groups. The report’s release brought widespread media attention, including coverage in Forbes and Adweek. At the beginning of August, the project’s lead researcher, Andrea Hackl, wrote for RDR about her experience adapting our indicators to track companies’ commitments to policies protecting LGBTQ users. 

The following conversation with Jenni touches upon the growth of politically motivated hate and disinformation, how GLAAD tailored RDR’s methodology to review the online experience of the LGBTQ community, the glaring lack of transparency from social media companies, and, following the report’s release, what’s next in the fight for LGBTQ rights online. They spoke on August 25, 2022.

 

Leandro Ucciferri: Thanks so much for being here with us today. We’re really happy to have you. To start off, we wanted to ask you: Why is tackling the role of social media platforms so important in the fight for LGBTQ rights? 

Jenni Olson: I think that clearly social media platforms are so dominant and so important in how we as a society are getting our information and are understanding or not understanding things. The problem of misinformation and disinformation⸺as I always say, another word for that is simply “lies”⸺is really a terrible problem and we find, in our work, that hate and disinformation online about LGBTQ people are really predominant. Obviously we also have things like COVID-related misinformation and monkeypox-related misinfo (which intersects very strongly with anti-LGBTQ hate and misinfo). There’s so much politically motivated misinformation and disinformation especially about LGBT folks, and especially about trans folks and trans youth. We as a community are being targeted and scapegoated. Right-wing media and right-wing politicians are perpetuating horrible hate and disinformation. These have escalated into calls for attacks on our rights, and even physical attacks.

Just in the last couple of months, the Patriot Front showed up at Idaho Pride, the Proud Boys showed up at Drag Queen Story Hour events at public libraries, there have been arson attacks on gay bars, etc. So much of the hateful anti-LGBT rhetoric and narratives that have led up to these attacks has been perpetuated on social media platforms. 

It’s important to note that we also have these political figures and “media figures,” pundits, who are perpetuating hate and extremist ideas for political purposes, but also for profit. One of the things we see is this grift of, “Oh, here’s all of this hate and these conspiracy theories” that then ends with, “Hey, come subscribe to the channel to get more of this.” These figures are making money, but the even more horrible thing is that the platforms themselves are also profiting from this hate. So the platforms have an inherent conflict of interest when it comes to our requests for them to mitigate that content. Which is frustrating to say the least.

Sophia Crabbe-Field: How much have you seen these trends and this rhetoric amplified in recent years? And was that a motivation for launching the Social Media Safety Index and, more recently, the scorecard?

JO: Things are so bad and getting worse. Right-wing media and pundits just continue to find new and different ways of spreading hateful messages. We did the first Social Media Safety Index last year, in 2021. The idea was to establish a baseline and to look at five platforms to determine what the current state of affairs is and say, “Okay, here’s some guidance, here are some recommendations.” Our other colleagues working in this field make many of the same basic recommendations, even if it’s for different identity-based groups, or different historically marginalized groups; we’re all being targeted in corollary ways. So, for instance, our guidance emphasizes things like fixing algorithms to stop promoting hate and disinfo, improving content moderation (including moderation across all languages and regions), stepping up with real transparency about moderation practices, working with researchers, respecting data privacy—these are things that can make social media platforms safer and better for all of us.

So we did this general guidance and these recommendations, and then met with the platforms. We do have regular ongoing meetings throughout the year on kind of a rapid-response basis; we alert them to things as they come up. They’re all different but some of them ask for our guidance or input on different features or different functionalities of their products. So we thought that for the second edition of the report in 2022, we’ll do scorecards and see how the companies did at implementing our 2021 guidance and, at the same time, we’ll have a more rigorous numeric rating of a set of elements.


LU: You came up with your own very specific set of indicators, inspired by our methodology. Why did you choose to base your approach on RDR’s methodology, and how would you explain the thought process behind the adaptation?

JO: We had been thinking about doing a scorecard and trying to decide how to go about that. We knew that we wanted to lean on someone with greater expertise. We looked to Ranking Digital Rights as an organization that is so well respected in the field. We wanted to do things in a rigorous way. We connected with RDR and you guys were so generous and amenable about partnering. RDR then connected us with Goodwin Simon Strategic Research, with Andrea Hackl (a former research analyst with RDR) as the lead research analyst for the project. That was such an amazing process and, yes, a lot of work. With Andrea, we went about developing the 12 unique LGBT-specific indicators and then Andrea attended some meetings with leaders at the intersection of LGBT, tech, and platform accountability and honed those indicators a little more and then dug into the research. For our purposes, the scorecard seemed like a really powerful way to illustrate the issues with the platforms and have them measured in a quantifiable way.
Though it’s called the “Social Media Safety Index,” we’re looking not only at safety, but also at privacy and freedom of expression. We developed our indicators by looking at a couple of buckets. The first being hate and harassment policies: Are LGBTQ users protected from hate and harassment? The second area was around privacy, including data privacy. What user controls around data privacy are in place? How are we being targeted with advertising or algorithms? Then the last bucket would be self-expression in terms of how we are, at times, disproportionately censored. Finally, there is also an indicator around user pronouns: Is there a unique pronoun field? Due to lack of transparency, we can’t objectively measure enforcement.


A clip from the 2022 GLAAD Social Media Safety Index Scorecard. 

What we end up hearing the most about is hate speech, but it’s important to note that LGBTQ people are also disproportionately impacted by censorship. We’re not telling the platforms to take everything down. We’re simply asking them to enforce the rules they already have in place to protect LGBTQ people from hate. 

One thing about self-expression: There’s a case right now at the Oversight Board related to Instagram that we just submitted our public comment on. A trans and non-binary couple had posted photos that were taken down “in error” or “unjustly”⸺they shouldn’t have been taken down. That’s an example of disproportionate censorship. And that relates back to another one of the indicators: training of content moderators. Are content moderators trained in understanding LGBTQ issues? And the corresponding recommendations we’ve made are that they should make sure to train their content moderators on LGBTQ-specific issues, they should have an LGBTQ policy lead, and so on. 

SC-F: When you were putting together the indicators, were you thinking about the experience of the individual user, or were you thinking equally, or even more so, about addressing the broader trends that social media platforms perpetuate, including anti-LGBTQ violence in general?

JO: I would say it’s a combination of both. The end goal is our vision of social media as a safe and positive experience for everyone, especially those from historically marginalized groups. I think it’s so important to state this vision, to say that it could be like this, it should be like this.

JO: One of the things that we really leaned in on is the policy that Twitter established in 2018 recognizing that targeted misgendering and deadnaming of trans and non-binary people is a form of anti-LGBT and anti-trans hate. It is a really significant enforcement mechanism for them. And so in last year’s report we said everyone should follow the lead of Twitter and add an explicit protection as part of their hate speech policies. We met with TikTok following last year’s SMSI release and, earlier this year, they added that to their policy as well. This was really great to see and significant when it comes down to enforcement. We continue to press Meta and YouTube on this. Just to be clear, targeted misgendering and deadnaming isn’t when someone accidentally uses someone’s incorrect pronouns. This is about targeted misgendering and deadnaming, which is not only very vicious, but also very popular.

SC-F: Did your research have any focus on policies that affect public-facing figures, in addition to policies geared toward the general user?

JO: In fact, over the last approximately 6-9 months, targeted misgendering and deadnaming has become pretty much the most popular form of anti-trans hate on social media and many of the most prominent anti-LGBTQ figures are really leaning into it.

Some particularly striking examples of its targets include Admiral Rachel Levine, a transgender woman who’s the United States Assistant Secretary for Health. There’s also Lia Thomas, the NCAA swimmer, and Amy Schneider, who was recently a champion on Jeopardy. Admiral Levine has been on the receiving end of attacks from Texas Attorney General Ken Paxton, as well as from Congresswoman Marjorie Taylor Greene. Most recently, Jordan Peterson, the right-wing extremist, attacked Elliot Page on Twitter, again using malicious targeted misgendering and deadnaming.

Platforms all have different policies related to public figures. In the Jordan Peterson example, Twitter took that down because it’s considered violative of their policy. But the YouTube video where he misgenders and deadnames Elliot Page like 30 times has more than 3 million views. And he’s expressing intense animosity toward trans people. YouTube did at least demonetize that video, as well as another Peterson video which expressed similar anti-trans views and outrageously described gender-affirming care (which is recognized as the standard of care by every major medical association in the U.S.) as “Nazi medical experiment-level wrong.” But they did not take the videos down despite their stated policy of removing content that attacks people based on their gender identity. So we continue to fight on these fronts that are about protecting individual people and the community in general.


LU:  Were there any unexpected or surprising findings to come out of this year’s research? Especially when we look at the differences in policies between various companies in addressing some of the above-mentioned issues?

JO: We were already pretty familiar with what the different policies are. The issue centers around enforcement and around the company culture or attitude toward enforcement. I think that the most surprising thing to me was actually that all the companies got such low ratings. I expected them not to do well. We know from anecdotal experience that they are failing us on many fronts, but I was really actually surprised at how low their ratings were, that none of them got above a 50 on a scale of 100.

Next year we’ll have new, additional indicators. I think there are some things that aren’t totally captured by the scorecard, but I’m not sure how to capture them. For instance, here’s an anecdotal observation that’s interesting to note about the difference between TikTok and the other platforms. (By the way, sometimes I’ll say a nice thing about one of the platforms and it’s not like I’m saying they’re so great or they’re better than everyone else, there are some things that some are better at, but they all failed and they all need to do better.) If you look at TikTok and you look at Jordan Peterson’s channel on TikTok it’s like he’s a different person (than on YouTube, for example) because TikTok just has a very low threshold for that kind of garbage. It’s clear that TikTok has said, “No you can’t say that stuff.” They’re monitoring the channel in such a way where it’s just not allowed. And, again anecdotally, it feels like Meta and YouTube have a much higher threshold for taking something down. There are nuances to these things. As LGBTQ people, we also don’t want platforms over-policing us.

LU: How hopeful are you that companies will do what needs to be done to create a safer environment given all the incentives they have not to? 

JO: We do this work as a civil society organization and we’re basically saying to these companies: “Hey, you guys, would you please voluntarily do better?” But we don’t have power so the other thing that we’re doing is saying there needs to be some kind of regulatory solution. But, ultimately, there needs to be accountability to make these companies create safer products. Dr. Joan Donovan at the Shorenstein Center has a great piece that I often think about that talks about regulation of the tobacco industry in the 1970s and compares it with the tech industry today, looking at these parallels with how other industries are regulated and how there are consequences if you have an unsafe product. If the Environmental Protection Agency says, “You dumped toxic waste into the river from your industry, you have to pay a $1 billion fine for that,” well then the industry will say, “Okay, we’ll figure out a solution to that, we won’t do that because it’s going to cost us a billion dollars.” The industry is forced to absorb those costs. But, currently, social media and tech in general is woefully under-regulated and we, as a society, are absorbing those costs. Yes, these are very difficult things to solve but the burden has to be on the platforms, on the industry, to solve them. They’re multi-billion dollar industries, yet we’re the ones absorbing those costs.

It’s not like the government is going to tell them: “You can say this and you can’t say that.” That’s not what we’re saying. We’re saying that there has to be some kind of oversight. It’s been interesting to see the DSA [the European Digital Services Act] and to see things happening all over the world that are going to continue to create an absolute nightmare for the companies and they are being forced to deal with that.


SC-F: You’ve already touched on the difficulty of measuring enforcement because of the lack of transparency, but from what you’re able to tell, at least anecdotally, do you see a wide schism between the commitments that are made and what plays out in the real-life practices of these companies? 

JO: I have two things to say about that. Yes, we have incredible frustration with inadequate enforcement, including things that are just totally blatant, like the example I just mentioned of YouTube with the Jordan Peterson videos. It was an interesting thing to feel like, on the one hand, it’s a huge achievement that YouTube demonetized those two videos, which means they are saying this content is violative. But it’s extremely frustrating that they will not actually remove the videos. In YouTube’s official statement in response to the release of the SMSI they told NBC News: “It’s against our policies to promote violence or hatred against members of the LGBTQ+ community.” Which quite frankly just feels totally dishonest of them to assert when they are allowing such hateful anti-LGBTQ content to remain active. They say they’re protecting us, but this is blatant evidence that they are not. As a community we need to stand up against the kind of hate being perpetuated by people like Jordan Peterson, but even more so we should be absolutely furious with companies like YouTube that facilitate that hate and derive enormous profits from it.


Source: GLAAD’s 2022 Social Media Safety Index. 

In addition to our advocacy efforts pressing the platforms on their policies, there are many other modes of activism we’re trying to be supportive of. The folks at Muslim Advocates have launched a lawsuit against Meta based on the concept of Facebook engaging in false advertising. Meta says that their platforms are safe, that their products are safe, but they’re not. I think it’s being referred to as an “experimental lawsuit.” It’s exciting to me that there are different kinds of approaches to this issue that are being tried out. Another approach is shareholder advocacy; there’s exciting stuff there. There are also things like No Hate at Amazon, an employee group opposing all forms of hate within the company, which actually did a die-in a couple of months ago over the company’s sale of anti-trans books. 

SC-F: Do you feel at all hopeful that the scorecard might lead to more transparency that would, eventually, allow for better monitoring of enforcement?

JO: I’m not naïve enough to believe that the companies are just going to read our recommendations and say “Oh wow, thank you, we had no idea, we’ll get right on that, problem solved, we’re all going home.” This kind of work is what GLAAD has done since 1985: create public awareness and public pressure and maintain this public awareness and call attention to how these companies need to do better. There are times when it feels so bad and feels so despairing like, “Oh, we had this little tiny victory but everything else feels like such a disaster.” But then I remind myself: This is why this work is so important. We do have small achievements and we have to imagine what it would be like, how much worse things would be, if we weren’t doing the work. I’m not naïve that this is going to create solutions in simple ways. It is a multifaceted strategy and, as I mentioned a minute ago, it is also really important that we’re working in coalition with so many other civil society groups, including with Ranking Digital Rights. It’s about creating visibility, creating accountability, and creating tools and data out of this that other organizations and entities can use. A lot of people have said, “We’re using your report, it’s valuable to our work.” In the same way that last year we pointed to RDR’s report in our 2021 SMSI report.


LU: That’s great. You had this huge impact already with TikTok this past year. In our experience as well, change is slow. But you’re moving the machinery within these companies. Tied to that, and to the impact you’re making with the SMSI, I think our last question for today would be: How do you envision the SMSI being used? Because, on one hand, GLAAD is doing their own advocacy, their own campaigning, but at the same time you’re putting this out there for others to use. Do you hope that the LGBTQ community and end users will use this data on their own? Do you expect that more community leaders and advocates in the field will use this information for their own advocacy? How do you see that playing out in the coming months and complementing the work GLAAD is doing? 

JO: Thanks for saying that we’ve done good work. 

One of the amazing things about the report is that it came out about three, four weeks ago, and we got incredible press coverage for it. And that’s just so much of a component: public awareness and people understanding that this is all such an enormous problem. But also building public awareness in the LGBTQ community. Individual activists and other colleagues being able to cite our work is very important: cite our numbers, cite the actual recommendations, cite the achievements.

It does feel really important that GLAAD took leadership on this. I was brought on two years ago as a consultant on the first Social Media Safety Index and we saw that this work was not being done. That was the first ever “Okay, let’s look at this situation and establish this baseline and state that this is obviously a problem, and here’s the big report.”

But at the same time, I just have a lot of humility because there are so many people doing this work. There are so many individual activists who inspire me; it’s so moving. Do you know who Keffals is? She’s a trans woman, a Twitch streamer in Canada. She does such amazing activism, such amazing work — especially standing up as a trans activist online and on social media, and just this past weekend she was swatted. Swatting is when someone maliciously calls the police and basically reports a false situation to trigger a SWAT team being sent to a person’s house. So they showed up at her house and busted down her door and put a shotgun in her face, terrifying her and taking her stuff and taking her to jail; it’s just horrifying. And, like doxing, it’s another common and terrible form of anti-LGBTQ hate unique to the online world which manifests in real-world harms. She just started talking about this yesterday and it’s all over the media. She’s putting herself on the line in this way and being so viciously attacked by anti-trans extremists. Anyway, it’s so powerful for people like her to be out there, courageously being who they are, and they deserve to be safe. I’m grateful that we get to do this work as an organization and that it’s useful to others. And I’m just humbled by the work of so many activists all over the world. 

If you’re a researcher or advocate interested in learning more about our methodology, our team would love to talk to you! Write to us at partnerships@rankingdigitalrights.org.

In its response to our letter campaign with Access Now, Meta takes issue with aspects of its score in RDR’s 2022 Big Tech Scorecard. Here’s why we stand by our results.

Ranking Digital Rights wishes to address Meta’s response to the letter campaign led by Access Now, in coordination with RDR. As part of this campaign, Meta, along with all the companies we ranked in our 2022 Big Tech Scorecard, was asked to make one improvement to its human rights performance. This year, Access Now called on Meta to be more transparent about government censorship demands, particularly those targeting WhatsApp and Facebook Messenger. While several companies issued responses, Meta’s was unique in raising questions about RDR’s standards and findings.

Meta’s response made a number of claims, which we address directly below.

  1. Meta’s claim: RDR’s standards are unattainable.

    What our data says: Meta notes that “it’s important that there be ambitious goals…but also that at least some of these be attainable.” Yet all of the goals set forth in RDR’s indicators are attainable; they simply require that corporate leadership dedicate time and willpower to fulfilling them. For example, when the inaugural RDR Index was released in 2015, none of the ranked companies disclosed any data on what content and accounts they restricted for breaching their own rules. As of our latest Scorecard, companies that do not disclose this information are quickly becoming the outliers. Similarly, even companies that already score well can make considerable progress from year to year.

  2. Meta’s claim: The Big Tech Scorecard doesn’t give points for publishing the results of human rights due diligence processes.

    What our data says: Meta claims that the Scorecard does not consider “criteria related to communicating insights and actions from human rights due diligence to rights holders.” It is true that our human rights impact assessment (HRIA) indicators focus on procedural transparency rather than simply the publication of results. We do recognize that Meta has coordinated with reputable third parties such as BSR and Article One Advisors to publish several abbreviated country-level assessments as well as to guide its work on expanding encryption. However, it has yet to demonstrate the same degree of transparency on issues that are fundamental to how it operates, including targeted advertising and algorithms. In addition, its country-level assessments have notable gaps. Human rights groups have raised serious questions about the lack of information Meta shared from its India HRIA in its inaugural human rights report. This HRIA was meant to evaluate the company’s role in spreading hate speech and incitement to violence in that country. Societies where Meta has a powerful and rapidly growing presence deserve more than a cursory view of the company’s impact, especially when Meta is being directly linked to such explicit human rights harms.

  3. Meta’s claim: RDR should have given Meta a higher score for its purported commitment to human rights standards in the development of AI.

    What our data says: Meta points to its Corporate Human Rights Policy, arguing that it “clearly specifies how human rights principles guide Meta’s artificial intelligence (AI) research and development” and questioning why our Scorecard “indicate[s] [Meta] do[es] not commit to human rights standards in AI development.” The problem is: Meta’s human rights “commitment” on AI falls short of actually committing. Our findings acknowledge an implied commitment to these standards (which equates to partial credit). For example, their policy states that human rights “guide [Meta’s] work” in developing AI-powered products and that Meta “recognize[s] the importance of” the OECD Principles on Artificial Intelligence. We encourage Meta to make its commitment to human rights in the development and use of AI an explicit one.

  4. Meta’s claim: RDR unfairly expects “private messaging” services to meet the same transparency standards as other services.

    What our data says: By inquiring about the factors RDR considers when “requir[ing] private messaging services, including encrypted platforms, to conform to the same transparency criteria as social media platforms,” Meta seems to be implying that we do not understand how their products work or that our indicators are not fit for purpose with respect to so-called “private messaging” services like Messenger and WhatsApp.

    To start with, Facebook Messenger, the more popular of the two apps in the U.S., is not even an encrypted communications channel (at least not yet). Meanwhile, many users are not fully aware of how “private” (or not) a messaging service is when they sign up for it. There is abundant evidence that Meta monitors Messenger conversations, ostensibly for violative content, but the precise mix of human and automated review involved remains a mystery. As efforts to strip people of their reproductive rights continue to grow, Meta has a responsibility to shine a light on government demands for users’ messages and information. Law enforcement agencies in U.S. states where abortion is now illegal have successfully obtained Messenger chats that eventually led to criminal charges. Finally, even for encrypted platforms like WhatsApp, our standards call for companies to be as transparent as possible regarding automated filtering, account restrictions, and other enforcement actions. Transparency on such basic protocols shouldn’t be too big of an ask.

Meta also notes its plan to build out its disclosures on government demands for content restrictions. This is an encouraging sign. In particular, Meta announced that it plans to publish data on content that governments have flagged as violating the company’s Community Standards—a tactic governments often use to strong-arm companies into compliance without due process. It also committed to start notifying users when content is taken down for allegedly violating a law. Our indicators have long called for companies to enact these two measures. Still, much work remains, not all of which is reflected in Meta’s plans.

The issues Meta has raised pertain, in this case, to how our standards address transparency on government censorship demands. This means that our most fundamental concern about Meta’s human rights record remains unaddressed: The company’s business model still relies almost entirely on targeted advertising. Meta does not report on the global human rights impacts of its targeting systems and publishes no data on how it enforces its advertising policies. These omissions are unjustifiable. There is widespread agreement that a business model powered by mountains of user data generates harmful incentives and ultimately leads to human rights harms. Even Meta’s shareholders are vigorously supporting calls to assess these harms, only to be stymied by Mark Zuckerberg’s inflated decision-making power.

Without addressing the problems that lie at the root of many of its human rights impacts or recognizing the need for systemic change, Meta will continue to “nibble around the edges,” as shareholders have argued in recent calls to action. Along with our allies, RDR will continue to push Meta and other Big Tech companies to achieve the standards needed to uphold human rights. We do so with the knowledge that what we are asking for from companies is not only fully achievable, but also very much essential. Meta can do better; they just have to commit to try.


Investor advocacy and shareholder action on human rights topics have reached unprecedented levels this year. Five of the largest global tech companies—Alphabet (Google), Amazon, Apple, Meta, and Twitter—all faced a record number of shareholder resolutions. The rise of investing based on environmental, social, and governance (ESG) factors has been a key driver of this trend. Civil society groups like RDR are providing the human rights standards and stories that allow investors to evaluate their holdings’ commitment to a better tomorrow.

Today we are proud to meet the evolving needs of investors with an update to our Investor Guidance page. Working with the investor community is in RDR’s DNA. We developed our very first Corporate Accountability Index in 2015 in partnership with leading ESG research provider Sustainalytics. Our work since then has been suffused with investor partnerships aimed at better protecting human rights. Today’s update illuminates how our work with shareholders has evolved in light of the surge of ESG-driven investor engagement and what digital rights topics have emerged as key investor priorities.

  • First, we are publishing more details about the impact of our work with investors. This includes joint undertakings with individual asset managers, but also sweeping projects like the Digital Rights Engagement Initiative, coordinated by the Investor Alliance for Human Rights. The initiative consists of coordinated outreach to individual companies by the 177 signatories of the Investor Statement on Corporate Accountability for Digital Rights, which calls on companies to report on their progress on digital rights and is based on RDR’s standards.
  • Second, we are updating and enriching our shareholder resolutions data with information about the outcomes of each resolution, including the result of the final vote. We are also bringing together stories about the direct and indirect impact of these votes: news reports, company announcements, and new campaigns inspired by each resolution. With this update, we are also marking resolutions that cite RDR and those whose development we supported directly.
  • Finally, we are creating a separate “Spotlight” space highlighting insights from members of the RDR team on topics we consider critical to both shareholders’ rights and to our human rights-based mission. Our inaugural Spotlight is our mini-report on bringing down barriers to shareholder advocacy.

The new data we are publishing today highlights noteworthy trends in investor behavior. Shareholders are revealing an increasingly nuanced understanding of the human rights impact of companies’ existing and emerging operations. Alphabet (Google), for instance, faced a petition this year calling on the tech giant to assess the impacts of its plans to build data centers in human rights hotspots such as Saudi Arabia. At Amazon, resolutions calling out the human rights violations enabled by its facial recognition and surveillance products continued to gain traction, winning a robust 40% of shareholder votes. Calls at both Meta and Google to terminate their multi-class share structures, which allow powerful executives to artificially dilute majority support for such resolutions, won near-unanimous support from independent shareholders, setting an all-time record.

Meanwhile, RDR’s involvement in shareholder resolutions has also evolved in the past two years: from providing data points to directly shaping them alongside activist investors. Our Scorecards provide a balanced assessment of more than two dozen companies. Where we see a company’s disclosures on a key topic persistently lagging behind, we help forge collective efforts to push them to improve. This year, we helped craft a proposal that called on Meta to assess the human rights impacts of its targeted ad-based business model, which won the support of over 70% of independent shareholders. We also worked with shareholders to call for an assessment of Google’s FLoC ad technology, which likely influenced the company’s decision to terminate the program.

Improving the behavior of powerful actors requires persistent effort over time. This understanding is baked into our research and rankings, which track the yearly ebb and flow of companies’ disclosures about how they protect users’ rights. It is baked into our policy engagement, which provides guidance for lawmakers to shape new legislation. It is baked into our work with advocacy partners around the world who adapt our methodology to create new windows of scrutiny. And it is baked into our collaboration with investors, who represent an increasingly powerful source of pressure on companies to act responsibly. We strive to connect these streams whenever possible.

Civil society watchdogs, the responsible investor community, and those working to reform companies from within share a common goal: strengthening corporate accountability and protecting human rights. Today’s update is one more step toward bringing these communities together and showing how their common goal can be achieved.

Ranking Digital Rights has once again partnered with Global Voices Translation Services to translate the executive summary of the 2022 Big Tech Scorecard into six major languages: Arabic, Chinese, French, Korean, Russian, and Spanish!

The RDR Big Tech Scorecard evaluates 14 companies, whose products and services are used by over four billion people worldwide, in all kinds of cultures and contexts. The languages of our translations represent the most commonly spoken languages in the countries where the companies we rank are located, and therefore reflect the global nature of our work.

Key components of our 2020 methodology, including our 2020 revisions, are also available in Spanish, French, and Arabic. These resources provide a great practical tool for anyone around the world who wishes to use and adapt our standards and build unique advocacy campaigns tailored to their goals and local contexts.

For example, over the past year, civil society organizations in West and Southern Africa, as well as South and Southeast Asia, have adapted our methodology to study local tech sectors in a total of 10 countries. Projects carried out by Paradigm Initiative in Angola, the DRC, and the Central African Republic, and by EngageMedia in six Southern Asian countries, used our standards to evaluate local telecommunication companies. Meanwhile, the Internet Freedom Project Lesotho evaluated financial services, in addition to telcos. Making our resources available in multiple languages is therefore a key part of our strategy to expand the reach and impact of our rankings and standards.

With these translations, we hope to support broader advocacy actions that can leverage this data and analysis to hold more companies accountable for policies that better respect people’s human rights online.

Translations for the Telco Giants 2022 Scorecard—forthcoming this fall—will cover 12 companies based in 12 different jurisdictions: Spain, the UK, the U.S., Norway, Germany, France, South Africa, Mexico, Malaysia, India, UAE, and Qatar. You can also visit our translations page for translations from previous years.

It takes a village: We thank Global Voices for their work on the translations, as well as our regional partners for their help in reviewing and promoting these materials!

Get in touch: If you’re a researcher or advocate interested in learning more about our methodology, our team would love to talk to you! Write to us at info@rankingdigitalrights.org.

How GLAAD is holding social media companies accountable through its Social Media Safety Index and its new Platform Scorecard.

Last month, hate and harassment of LGBTQ social media users made headlines when actor Elliot Page was deadnamed and misgendered on Twitter by conservative author and academic Jordan Peterson. Peterson’s account was suspended for violating Twitter’s hateful conduct policy, which prohibits targeted deadnaming—referring to someone’s name prior to transition—and misgendering. (Twitter is one of only a few social media platforms, including TikTok and Pinterest, with such a policy.) The past year has shed tremendous light on the hate and harassment faced in online environments by members of the LGBTQ community, thanks to widespread media coverage and research. According to a recent report by the Anti-Defamation League (ADL), two-thirds of LGBTQ respondents say they have experienced harassment online.

The digital rights community has long called on social media platforms to do more to create a safer online environment for LGBTQ people and other vulnerable communities. With GLAAD’s new 2022 Social Media Safety Index (SMSI) report and its Platform Scorecard, the community now has a tool in hand to hold companies to account for how their publicly disclosed policy commitments impact LGBTQ expression, privacy, and safety online.

For this first-ever Scorecard, Goodwin Simon Strategic Research (GSSR), the independent public opinion research firm, partnered with GLAAD and RDR to create an accountability tool building on RDR’s rigorous methodology and best practices. GLAAD’s inaugural SMSI report in 2021 laid bare the existing state of LGBTQ safety, privacy, and expression on the platforms. It also set forth the eventual goal of evaluating the platforms using a standardized scorecard. Given the critical role that RDR plays in holding major tech companies accountable for respecting user rights, GLAAD looked no further than RDR’s standards and best practices when setting out to develop the new 2022 SMSI Scorecard.

Across 12 indicators, GLAAD assessed how Twitter, Instagram, Facebook, YouTube, and TikTok’s publicly disclosed policies impact their LGBTQ users. These indicators draw on best practices and guidelines, as well as feedback from RDR, while more directly addressing issues impacting LGBTQ users. For example, the first indicator looks at companies’ disclosed policy commitments to protect such users (for more details, you can read our one-pager describing our scorecard development and see the full list of indicators).
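
To make the Scorecard arithmetic concrete, below is a minimal sketch (in Python) of how per-indicator results could roll up into the combined percentages discussed in the next paragraph. The 0-to-100 scale per indicator and the equal weighting are illustrative assumptions only; GLAAD’s Research Guidance and the one-pager mentioned above describe the actual scoring approach.

    # Illustrative sketch only: assumes each of the 12 indicators yields a score
    # from 0 to 100 and that all indicators are weighted equally. The real
    # Scorecard scoring is described in GLAAD's Research Guidance.
    from statistics import mean

    def platform_score(indicator_scores):
        """Combine 12 indicator scores (0-100 each) into one 0-100 platform score."""
        if len(indicator_scores) != 12:
            raise ValueError("expected scores for all 12 LGBTQ-specific indicators")
        return round(mean(indicator_scores), 1)

    # Hypothetical numbers, not GLAAD's data: a platform scoring 45 on every
    # indicator would land at 45.0, i.e., below the 50% threshold noted in the report.
    print(platform_score([45.0] * 12))  # -> 45.0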

This first iteration of the Platform Scorecard shows that leading social media platforms are failing to adequately protect their LGBTQ users. None of the five companies that GLAAD evaluated had a combined score, across all indicators, of more than 50%. In the report, GLAAD highlights several areas where tech companies need to do better. Of note: There is a clear lack of transparency across the board. Some of the most glaring findings include:

  • Companies lack transparency about what options users have to exert control over whether and how information related to sexual orientation and gender identity is collected, inferred, and used by platforms to draw conclusions about LGBTQ people’s identities.
  • In particular, companies disclose little regarding what control users have over whether they are shown content based on this information. Users should not be shown content based on their gender identity or sexual orientation unless they explicitly opt in.
  • Companies also lack transparency about the steps they take to address demonetization and wrongful removal of legitimate LGBTQ-related content from ad services. This means, for example, that when LGBTQ creators’ content is suppressed or removed by platforms, the companies share little information with the creator explaining why.
  • While all of the companies we evaluated claim to engage with organizations representing LGBTQ people, none of the companies disclose the appointment of an LGBTQ policy lead to ensure that the companies’ policies reflect the true needs of LGBTQ users.
  • Twitter and TikTok are currently the only two platforms GLAAD evaluated that have policies prohibiting targeted deadnaming and misgendering. TikTok adopted this prohibition in response to the release of the 2021 SMSI. (It’s worth noting that the general lack of transparency from platforms means that the SMSI cannot assess the companies’ enforcement of stated policies, including this one.)

For each of the companies in the report, GLAAD lays out clear policy recommendations that companies should implement in order to create a safer online environment for their LGBTQ users. For instance, social media companies should emulate Twitter and TikTok by prohibiting targeted deadnaming and misgendering. Other recommendations include a ban on potentially harmful and/or discriminatory advertising content, the disclosure of training for content moderators, and a commitment to continuously diversifying the company’s workforce. As mentioned, companies should also hire an LGBTQ policy lead as part of their human rights teams to oversee the implementation of these policy commitments and ensure that they are truly reflective of users’ needs. Following such recommendations would not only allow companies to create a safer online environment for the LGBTQ community, but would also potentially lead the way on progress for other vulnerable communities and under-represented voices. For example, a commitment to diversifying a company’s workforce would help make sure that people of color, people with disabilities, as well as other groups are represented within the company and included in the development and implementation of the company’s policies, products, and services.

Thanks to GLAAD’s SMSI and Platform Scorecard, the digital rights community is now able to track companies’ progress on commitments to policies meant to protect LGBTQ users. GLAAD is holding ongoing briefings with each platform to review issues that LGBTQ users face and advocate for the recommendations described in the report. GLAAD’s Social Media Safety program maintains an ongoing dialogue about LGBTQ safety amongst tech industry leaders. It also spotlights new and existing safety issues facing LGBTQ users in real-time, both to the platforms and to the press and public.

The threats that LGBTQ users, as well as those from other vulnerable communities, face online are many and are constantly evolving. Therefore, GLAAD hopes that the SMSI will continue to expand and grow to include indicators on other pressing LGBTQ-related policy issues, including disinformation on gender-affirming care. The Scorecard has an important role to play in helping to create an online environment that might finally allow LGBTQ users to express themselves both fully and safely online.

Read the full GLAAD Social Media Safety Index and Platform Scorecard.