Since 2018, organizations across a range of socio-political contexts have adapted RDR’s methodology to hold platforms and other ICT services accountable. What do they all have in common? Protecting the rights of some of the globe’s most marginalized and discriminated-against populations. Two of the most successful and notable adaptations are measuring how well social media companies protect LGBTQ rights online and evaluating messaging apps under conditions of state repression in Iran.

The former was conducted by the American LGBTQ media advocacy organization GLAAD to create its now-annual Social Media Safety Index (SMSI), which includes a Platform Scorecard evaluating five top platforms (Facebook, Instagram, Twitter, YouTube, and TikTok) on how well they protect LGBTQ users. Among the main issues harming LGBTQ users, as identified by Jenni Olson, Senior Director of the Social Media Safety Program, are inadequate content moderation and enforcement, harmful algorithms, and a lack of transparency. The SMSI sheds light on how hate speech, misinformation, and conspiracy theories targeting LGBTQ people are able to spread and manifest online unchecked. Each year, the SMSI has made headlines. But the 2023 SMSI stirred the most controversy, owing to the large drop in score experienced by Twitter. The company, now under the helm of Elon Musk, fell 12 points (all other platforms saw their scores increase), making it the most dangerous platform for LGBTQ people that year.

The second adaptation was used to create the report “Digital Rights & Technology Sector Accountability in Iran,” a collaboration between Filterwatch and Taraaz examining both local and international messaging apps. When Roya Pakzad, Taraaz’s Founder and Director, last spoke to us about her work on the report, protests were ongoing over the death of 22-year-old Mahsa Amini, who died in police custody after being arrested for “improperly” wearing her hijab. The mass mobilization that followed was sparked over social media and was met with internet shutdowns and outages. Meanwhile, the government was being accused of pushing through the “draconian” Internet User Protection Bill, which would vastly curtail what Iranians can access on the web.

These two conversations with Jenni and Roya were first published in 2022, as part of our interview series, Digital Rights Dialogues. Among other things, we spoke to them about why they chose to use the RDR methodology, how they adapted it, and how it is helping them achieve their goals:

Jenni Olson: We had been thinking about doing a scorecard and trying to decide how to go about that. We knew that we wanted to lean on someone with greater expertise. We looked to Ranking Digital Rights as an organization that is so well respected in the field. We wanted to do things in a rigorous way. We connected with RDR and you guys were so generous and amenable about partnering. RDR then connected us with Goodwin Simon Strategic Research, with Andrea Hackl [a former research analyst with RDR] as the lead research analyst for the project. That was such an amazing process and, yes, a lot of work. With Andrea, we went about developing the 12 unique LGBT-specific indicators. Andrea then attended some meetings with leaders at the intersection of LGBT, tech, and platform accountability, honed those indicators a little more, and then dug into the research. For our purposes, the scorecard seemed like a really powerful way to illustrate the issues with the platforms and have them measured in a quantifiable way.

Though it’s called the “Social Media Safety Index,” we’re looking not only at safety, but also at privacy and freedom of expression. We developed our indicators by looking at a couple of buckets. The first is hate and harassment policies: Are LGBTQ users protected from hate and harassment? The second is privacy, including data privacy: What user controls around data privacy are in place? How are we being targeted with advertising or algorithms? The last bucket is self-expression, in terms of how we are, at times, disproportionately censored. Finally, there is also an indicator around user pronouns: Is there a unique pronoun field? Due to the companies’ lack of transparency, we can’t objectively measure enforcement.

What we end up hearing the most about is hate speech, but it’s important to note that LGBTQ people are also disproportionately impacted by censorship. We’re not telling the platforms to take everything down. We’re simply asking them to enforce the rules they already have in place to protect LGBTQ people from hate. 

I’m not naïve enough to believe that the companies are just going to read our recommendations and say “Oh wow, thank you, we had no idea, we’ll get right on that, problem solved, we’re all going home.” This kind of work is what GLAAD has done since 1985: create and maintain public awareness, apply public pressure, and call attention to how these companies need to do better.

There are times when it feels so bad and feels so despairing like, “Oh, we had this little tiny victory but everything else feels like such a disaster.” But then I remind myself: This is why this work is so important. We do have small achievements and we have to imagine what it would be like, how much worse things would be, if we weren’t doing the work. I’m not naïve that this is going to create solutions in simple ways. It is a multifaceted strategy and, as I mentioned a minute ago, it is also really important that we’re working in coalition with so many other civil society groups, including with Ranking Digital Rights. It’s about creating visibility, creating accountability, and creating tools and data out of this that other organizations and entities can use. A lot of people have said, “We’re using your report, it’s valuable to our work.”

Roya Pakzad: For a long time now, the Iranian digital rights ecosystem has been Iranian people resisting government censorship and the Iranian government trying to censor the internet. If you read the literature from 2008 until 2016, you see that civil society wasn’t really focusing on the role of companies in its digital rights advocacy. The focus was mainly on government censorship. So we wanted to say, “Oh no, there are so many actors in the middle, and we have to focus on them because they have a responsibility too.” Part of that was just introducing the idea of corporate social responsibility.

We wanted to bring GNI (the Global Network Initiative), a non-governmental organization that helps companies respect freedom of expression and privacy rights when governments pressure them to hand over user data or to remove or restrict content, into the conversation. We wanted to introduce multi-stakeholder engagement. We wanted to introduce human rights impact assessments. We wanted to introduce Ranking Digital Rights’s great index and show that you can use it to evaluate yourself as a company, or journalists can use it to evaluate you. That’s why we didn’t just pick certain indicators [from Ranking Digital Rights’s methodology]; we used all of them, because the main purpose was educational: introducing the idea of business and human rights, along with human rights due diligence and human rights impact assessment policies.

We did have to adapt for the context of Iran, where there is currently little discussion of business and human rights. The other thing we noted is that e-government services are offered as add-ons to other services. The government also incentivizes the use of Iranian messaging apps by making them cheaper: the data you pay for to access local apps costs less than the data to access foreign apps like Telegram or WhatsApp. If you don’t have enough money to pay for VPNs, you can only use Iranian apps, which penalizes people because of their socio-economic status, since the government sets the data tariffs. In the context of Iran, we had to pay attention to the narrative we used and explain why we were using privacy and freedom of expression indicators and mixing them with a discussion of the socio-economic context.

[A colleague] and I also recently worked with Iran Academia’s MOOC program to record a lecture based on our RDR report. We have seen a lot of attention directed at the role of technology companies and technologists in digital rights in Iran. The gap that we saw back in 2017, with regard to the lack of attention to the private sector, has been shrinking dramatically in just a year. We have seen so much mobilizing, dialogue, and resistance from the tech ecosystem in Iran against government policy, like tech companies putting up banners on their websites publicly announcing their objection to the [Internet User Protection Bill]. There have also been cases of naming and shaming public-private partnerships and contracts.

Companies have told us, informally and through back channels, that they are interested in using the workbook [we produced with the report] to revise their policies and update them. A non-ranked company even asked me to give a talk in their forums and for their employees (which I decided not to do, because I was worried about getting them in trouble). We have seen ICT journalists inside the country using approaches from the RDR Index to compare company policies.

Company engagement is something that we learned a lot from. The companies that we evaluated completely ignored us, to be honest with you. Sometimes we saw that people from an evaluated company had added us on LinkedIn. So we knew that they read the report, but they didn’t engage, even though we contacted them over email, Twitter, and LinkedIn.

But non-evaluated companies, such as marketplace apps, said, “Oh, we want to update our policies and we will use the workbook.” Because they were not evaluated, they were like, “Okay, we are safe.” They interacted with us and with journalists and students in tech policy; they were interested.

If you are interested in adapting the RDR methodology to achieve your corporate accountability goals in the tech sector, please get in touch at partnerships@rankingdigitalrights.org or visit our Research Lab for more information.

Zak Rogoff: Can you start by giving me your observations of what RDR is like now compared to the beginning, when you founded it?

Rebecca MacKinnon: In 2013, it was just an idea getting off the ground. We didn’t actually have an index of any kind until 2015. And in 2013, I was the only full-time employee. In the first half of 2013, I had a collaborative partnership with the University of Pennsylvania and a bit of funding to support some interns and fellows, plus Allon Bar, whom I hired on contract in late 2013. So it was a very shoestring operation. We kind of cobbled it together with band-aids and paper clips and scotch tape.

We just had the idea that there needed to be a standard against which companies should be evaluated for respecting human rights. It needed to be something that made sense to investors. It needed to be something that resembled rankings and ratings of companies on other issues and in other industries. It needed to learn from what others had done in terms of what was effective and what was not.

We didn’t have the funds to actually produce the ranking for the first couple of years. So we took our time doing a lot of research and consultation and producing iterative drafts of criteria before it actually became a methodology. So, in the summer of 2013, I was working with Tim Libert and Hae-in Lim, who were students, and also with some researchers funded by Internews, to take some draft criteria and test them out by looking at companies in different countries and regions to figure out which criteria were even measurable or made any sense to evaluate, and which types of criteria didn’t work. Just to get some basic understanding of what made sense to include if the purpose was to incentivize company improvement and not just create sensational reports about outrageous things.

We learned a lot, but we also faced some resistance. A number of companies have since pointed to their results in the Index and to their improvement, and have described RDR’s work as useful and constructive. But in 2013, some of the same companies were not happy about the idea of a public ranking when they learned about it, and tried to convince me it was not a good idea.

ZR: That’s a great story. And now they’re using it.

RM: At the time, a number of companies found this idea to be quite threatening. And today some of those same companies are using the Index and the methodology internally and, at least privately, if not publicly, acknowledge that it is very helpful.

ZR: Usually, when I explain to people why I think what we do makes a difference, the thing that I start with is that there are companies that actually talk to us and they say, “Look, we made this change because you asked us to do it.” It’s satisfying to know that some of those who use it now felt differently back then because obviously, as you know, there are still companies that won’t give us the time of day. But clearly that can change.

RM: Yeah, I mean I really knew that we were succeeding when a company that wasn’t in the Index approached me and asked if they could be in the Index. We only had resources for so many companies, so we had to prioritize. We just weren’t able to include them.

ZR: Tell me, what was your proudest moment working at RDR? 

RM: Well, there were a lot of proud moments so it’s hard to pick one. When investors started citing our data in shareholder resolutions, that was a very proud moment. When we saw Apple actually making changes in response to the shareholder resolution that cited our data, that was a very proud moment.

Another very proud moment was seeing civil society and research groups around the world using the methodology to hold industry in their regions accountable. When SMEX applied our methodology to telcos in the Middle East/North Africa region and showed just how little was being disclosed and just how poor the policies were, that was a very proud moment. It showed that the methodology can be used in a lot of different ways. It’s not just about people in a privileged country applying criteria to people and companies in other parts of the world; it’s people in their own regions using these criteria to empower themselves to advocate with companies in their region to protect their rights better. That was a really proud moment, when we were able to see how the methodology was being picked up by people in lots of different parts of the world.

ZR: I feel like that’s one of the parts of the work that’s grown the most exponentially. I can’t even keep track of how many irons we have in the fire with different people doing adaptations of our methodology.

RM: What’s made me most proud, especially since I’ve left and seen more adaptations coming out, is just knowing that this methodology and the work we did together, and the work you all have done since, lives on no matter what. Through the methodology and its way of thinking about human rights, digital rights, and corporate accountability, RDR has left an indelible mark that’s going to continue to evolve through all kinds of research. It’s had a huge impact on how investors think about these issues. And so, no matter how RDR evolves from here, it’s going to live on in really interesting ways. The impact is going to continue to spread in ways that will be hard to predict.

And one thing that I really like is that it’s not centralized. We don’t control the methodology, we don’t control how it’s used. On one hand that might seem scary, because who knows who might do what in what strange way. But on the other hand, the fact that we don’t control it in a centralized fashion and that we’re not trying to control the IP, we’re not trying to license the use of the methodology, means that it can’t die. People are free to build off of it in different directions, in ways that empower different groups of people to use the leverage that they have available, to bring about change by companies. 

ZR: My last question on this topic is, what do you think is unique about it to this day? Obviously, when you started it, it was completely unique. There was nothing like it. But there are more people looking at these issues than there were back then. So what do you see as unique about it, especially now that you have some distance from it? 

RM: I guess one thing that’s unique about it, is that it represents the input of a lot of people from many different fields, from many different parts of the world. We were not just a group of experts who sat down and figured out a methodology. We had a few ideas, but we workshopped them with people from a range of different countries and regions with a range of different types of expertise. We then took that and workshopped it again, talked to companies about it and got a lot of company input in addition to input from other stakeholders, then revised it again. We did a test run, then improved it again for the first Index, then learned the lessons from the first Index, and improved it again.

And so because of its iterative and consultative nature, it doesn’t constitute any one person’s ideas or agendas. It’s a high bar, but it’s a reasonable bar. It’s achievable and it reflects a broad consensus of what a rights-respecting company ought to be doing.

ZR: Tell me what you’re working on right now. I know you’ve been doing a lot of Section 230-related work?

RM: So at the Wikimedia Foundation, the job of my team, which is responsible for public policy advocacy, is to advocate for laws, regulations, and government behaviors that make it possible for people to edit Wikipedia, no matter who they are and where they are, without fear of being threatened or censored or sued and so on. In the U.S., Section 230 is an existential thing for us because not only does it protect us from liability for what people post on Wikipedia, but it protects the right of volunteer editors to actually enforce rules that are context-appropriate without being sued, and without us being sued. And so the entire model depends on Section 230.

Whether any reforms to Section 230 would be okay, you’d really have to red-team it, and in great detail, to see what unintended consequences could arise for Wikipedia or other public interest platforms that do not rely on targeted advertising business models. Most Section 230 reform proposals focus on the large commercial platforms, and their authors aren’t thinking about the implications for non-profit or decentralized, community-run platforms like ours.

ZR: You can’t say, “Oh, it needs to stay exactly the same,” but if it is changed, it ultimately has to be a tested, well-considered change.

RM: Exactly. So people ask, “Well, could you accept any reform?” And it’s hard to say in general terms.

Wikipedia is now subject to the EU’s Digital Services Act as a Very Large Online Platform, which means that the Wikimedia Foundation (as Wikipedia’s technical and legal host) is required to do risk assessments and transparency reporting. Gee, sounds familiar (these are key parts of RDR’s standards). The DSA also requires grievance and remedy mechanisms. Again, sounds familiar. I think there could have been some RDR influence on what the DSA requires. We were already doing these things, so it’s mainly about strengthening our practices, making sure we are communicating them clearly, and addressing risks in the ways European regulators require.

Another important thing about the DSA is that we (Wikimedia) were in dialogue with European lawmakers throughout the drafting process. The final scope of the law took Wikimedia’s volunteer-run content moderation model into account: it only applies to content moderation rules set and enforced by the platform operators, not volunteer communities.

But I’m not convinced that U.S. lawmakers should consider DSA-style requirements in the context of Section 230. Of course, having a privacy law would help deal with a bunch of the issues that people are trying to solve with Section 230. (RDR’s analysis of targeted advertising and algorithmic systems reached a similar conclusion in 2020.) So we are saying to Congress: Why don’t you try the other things first before you mess with Section 230? But mainly, we’re just asking, “If you do want to revise Section 230 in a way that doesn’t hurt Wikipedia, then you need to bring us into the room when you’re drafting anything.”

ZR: I always use Wikipedia as the first thing I refer to when people ask: Why shouldn’t we change Section 230 or why shouldn’t we break it? It’s not a niche thing, everybody uses it. 

Well, this was great. Thank you and it was good to talk to you.

RM: Well, congratulations and I look forward to following what happens next.


In 2018, the Cambridge Analytica scandal helped propel the perils of surveillance capitalism into the mainstream. The following year, the release of Shoshana Zuboff’s pivotal book, The Age of Surveillance Capitalism, cemented the issues of data privacy and targeted advertising as top problems of our time, not just for a bevy of experts, but for the public at large. It was in this context that Ranking Digital Rights released its first major report, It’s the Business Model. The report argued that the rampant misinformation and hate speech we were seeing perpetuated by social media companies was not the sole product of a lack of content moderation, and therefore could not be addressed through intermediary liability reform (in other words, by getting rid of Section 230).

Rather, the report argued that the pathologies of the online environment were the downstream result (a negative externality, in economic terms) of the incentives created by the industry’s targeted-advertising business model: collect and monetize all data, automate everything, scale up, and wait for the profits to roll in.

The report was influenced by recent changes RDR had made to its methodology as the consensus around these trends and their pervasiveness in the industry began to solidify, including the addition of new indicators on algorithms and targeted advertising. As Nathalie Maréchal, the report’s lead author and Ranking Digital Rights’s former Policy Director, recalls, “the Big Tech business models had all kind of started to converge toward the collection and monetization of data, either for the purpose of advertising or for the purpose of AI development.” For these companies, the acquisition of data became both “a business imperative, and also an ideological imperative.”

This was different from how things were when the methodology for the first RDR Corporate Accountability Index was conceived in 2013. At the time, most of RDR’s indicators evaluated either “things that companies were doing at the behest of governments or things that basically amount to negligence [for example, poor data security].” Since then, however, it had become clear, both to Maréchal and to RDR Founder Rebecca MacKinnon, that companies also made a lot of decisions based purely on their own self-interest. Meanwhile, Maréchal found herself fed up with the reigning policy discourse in D.C. and Brussels, which gave the impression that “the only thing wrong with social media is that CEOs are insufficiently motivated to do content moderation correctly.”

Sara Collins, Senior Policy Counsel at Public Knowledge, agrees. For a while, most D.C. policy discussions about how platforms “may spread misinformation and threaten democracy” would, reflexively, also become discussions about “how to get rid of Section 230.” As she explains, the report helped “thread the needle about why [data collection] has residual content harms.” This is especially important for organizations like Public Knowledge, which places a strong emphasis on free speech online.

Nathalie recalls a metaphor MacKinnon shared with her at the time: Relying on content moderation alone is like trying to remove pollutants from a polluted lake using only a pipette. It was clearly going to be insufficient to clean up the vast networks of disinformation across platforms. The It’s the Business Model report was conceived as part of a necessary narrative shift, and it drew on RDR’s new indicators to strengthen the connection between the business model and the harms RDR was observing directly through its company evaluations and close relationships with global civil society organizations.

Changing the Conversation: The Report Comes Out

Unfortunately, the report’s launch event was planned for March 17, 2020, days after the COVID-19 pandemic was declared, and it had to be canceled. Even so, the report had important implications across the policy sphere, with a number of allies reporting that it decisively shaped their thinking about the business model.

Jesse Lehrich, Co-Founder of Accountable Tech, explained that “the It’s the Business Model report was really critical and ahead of its time as far as moving the advocacy community and policymakers to think beyond content moderation and deplatforming.” He describes the report as “formative” in shaping a lot of his organization’s work and, in particular, their “Ban Surveillance Advertising” campaign, which brought together over 50 organizations around the globe. A focus on the surveillance advertising business model has also served as a way to “break down silos” between different parts of the advocacy community, Jesse points out. 

Privacy advocates, civil rights groups, and anti-monopoly activists are sometimes at odds, but the business model was something they could all coalesce around. And this remains true as the community begins to grapple with the potential impacts of AI. In fact, the AI Now Institute, in a recent report, referenced Accountable Tech’s campaign to ban surveillance advertising as an important model. Though the arguments in RDR’s report spoke most clearly, at the time, to surveillance advertising, Nathalie Maréchal agrees that “today we see the same cold logic applied in the world of artificial intelligence and automated decision-making.”

Meanwhile, Jesse believes the report also played an important role in galvanizing the legislation and regulatory frameworks that have come about since. He points, for example, to the inclusion in the EU’s Digital Services Act (DSA) of bans on targeted advertising to children and on the use of sensitive data for targeting as the kind of regulatory response that was made possible thanks, in part, to RDR’s work. Sara Collins agrees, noting, “I do really think that [the report] has shaped how people are talking about the content space. You still obviously get the Section 230 bills, but now that’s not the only solution put forward.”

At the time of the report’s release, Anna Lenhart was working on tech oversight for Representative David Cicilline on the House Judiciary Committee. One prime area of focus for her was ad targeting and ad libraries, and understanding what kinds of information are useful for measuring discrimination in ad targeting. By 2021, Anna was advising on a number of potential bills taking aim at the surveillance advertising industry, including Congresswoman Lori Trahan’s Social Media Data Act. In particular, that bill would mandate that companies keep thorough ad libraries to help bring transparency to ad targeting. One thing Anna was looking for while conducting her research was reports that provided examples of potentially problematic advertising and ad campaigns. And this was something she found in the Business Model report. “It’s always really helpful [to have examples] when you’re trying to tell the story to constituents or briefing members of Congress,” she explains.

That year, Nathalie was called upon to testify on the Hill, and Anna requested her expertise during several meetings while the Congresswoman and her staff worked to craft the bill. Notably, Anna’s former boss, Congressman David Cicilline, also made several references to the “business model” during an antitrust hearing, while grilling the leaders of the major Big Tech companies. Finally, at the international level, UN Special Rapporteur Irene Khan referenced the “business model” in an important report for the UN Human Rights Council on “disinformation and freedom of opinion and expression.”

Though the report was released at a time when other events and thinkers were helping to shift the conversation, it played an essential role, at a pivotal moment, in further popularizing the idea of the “business model” as the real root of the growing problems of mis- and disinformation. Its release came at just the right time to galvanize policymakers and civil society alike and to leave a lasting imprint on ongoing policy conversations, conversations which have taken on new meaning and urgency with the AI arms race now upon us.


From its inception, Ranking Digital Rights’s standards and methodology were designed with investors in mind. Indeed, our Corporate Accountability Index was devised almost a decade ago alongside ESG ratings provider Sustainalytics. Since then, RDR has aimed to ensure our standards would be usable by responsible investors interested in tackling growing concerns around the regulatory and human rights risks linked to Big Tech. As RDR Founder Rebecca MacKinnon noted in our inaugural Investors Research Note in 2017, though “digital rights issues [had] been hiding in plain sight for more than a decade,” the complexity of the issues involved had made “it hard for many investors to recognize the potential significance of specific abuses or to track evolving performance standards.” These sentiments were recently echoed by former RDR Investment Engagement Manager Jan Rydzak, who explained that benchmarks like RDR continue today to “highlight companies’ impact on rights that have often been neglected by existing ESG frameworks.”

In the investor community, “there’s this traditional view of what human rights are and what impacts human rights, including supply chain issues and worker safety issues,” Lydia Kuykendal, Director of Shareholder Advocacy at Mercy Investment Services, explained. “A lot of investors, a lot of people that do our work, have those more traditional views and do not feel comfortable with any type of tech, let alone cutting edge tech,” she continued. Therefore, having the kind of support that RDR provides has been “more important than in almost any other space.” Working with organizations like RDR is also particularly useful for those in the investor community who are working across different ESG issues, as Michela Gregory, Director of ESG Services at NEI Investments, added. Much of this work has been facilitated through RDR’s close working relationship with the Investor Alliance for Human Rights (The Investor Alliance).

RDR and the Investor Alliance for Human Rights Join Forces

The Investor Alliance was formed in 2018 as an initiative of the Interfaith Center on Corporate Responsibility (ICCR) to grow the number and capacity of global investors engaged on business and human rights concerns. The Investor Alliance’s work is centered on the UN Guiding Principles on Business and Human Rights, the same set of international principles that guides RDR’s work. ICCR’s genesis in the early 1970s came in response to calls for religious and faith-based investors to divest from South Africa in order to pressure the government to abolish apartheid. Like her colleagues in the responsible investment space, the Investor Alliance’s Director, Anita Dorett, found that, initially, most investors, and the businesses they engaged with, had a narrow view of human rights risk, one generally focused on supply chain concerns. The Investor Alliance’s decision to focus on human rights risks in the tech sector represented an important shift.

Meanwhile, RDR’s focus on digital rights and its alignment with the same international guiding principles made the two natural allies, Anita explained. In addition, she said, “we want to ensure all of our engagements are research-based and data-driven; comparative data is really important. So RDR was kind of an obvious choice for us.” RDR’s value-added was clear as soon as Anita started engaging with Founder Rebecca MacKinnon, who, she said, “poured her attention and her focus on investors utilizing the RDR data and really rallied around collaborating with us, understanding that the critical users of this data will be investors.”

Though RDR continued to speak to investor needs over the years, including through successive investor updates, it was the release of the Investor Alliance’s “Investor Statement on Corporate Accountability for Digital Rights” in 2021 that truly cemented RDR’s key role in helping to galvanize shareholder proposals around human rights concerns in the digital sphere. The statement, signed by 176 investors representing over US$9.2 trillion in investments, outlined the need for companies to adapt to “investor expectations in line with evaluations and recommendations of the 2020 Ranking Digital Rights Corporate Accountability Index,” in particular around privacy and freedom of expression.

The Investor Alliance convenes and helps coordinate the collective work of a diverse group of investors. As Anita explained, for this to be successful, “you need everybody on the same page sharing a common set of investor expectations.” Therefore, “the investor statement represents the articulation of investors’ expectations, based on the data RDR provides, and using RDR’s recommendations, with RDR’s expertise and analysis, to hold companies to account or to drive companies to fill in the gaps in their digital rights performance.” The signatories’ decision to align their expectations around RDR’s work didn’t come as a surprise to Mercy’s Kuykendal, who added that “familiarity and trust with RDR among the investor community made it easier for many to sign onto the statement.”

The statement also represented the culmination of growing investor interest in the potential digital risks presented by the tech sector. A first iteration, in 2019, garnered just under 50 signatories. But by 2021, interest in tech issues had increased significantly, Anita explained. During this time, shareholder proposals demanding everything from human rights policies to dedicated governance structures had been put forward for the first time at tech companies including Apple, Amazon, and Facebook (Meta). These proposals helped to further grow awareness, even among investors who voted against them. Unsurprisingly, according to Anita, today “every time we speak to a new investor, they want to talk about tech.”

Lydia Kuykendal recalled that, before 2021, Mercy had done little in the tech space; most of its growing body of work there has come through its affiliation with the Investor Alliance. For NEI’s Michela Gregory, the statement has served as an important launchpad for the engagements and dialogues with companies that have come since. The Investor Alliance’s Digital Rights Engagement Initiative continues to coordinate outreach to RDR’s ranked companies by the statement’s many signatories, which include NEI and Mercy Investments.

RDR Begins Supporting Key Proposals, Including at Meta

While RDR was first cited in a proxy resolution in 2020, we began directly supporting the crafting of such proposals in 2021. At Meta, for the second time running, shareholders recently voted on one of the most consequential RDR-supported proposals, calling for a human rights impact assessment of the company’s targeted-advertising policies and practices. It has been one of the most successful proposals in the company’s history, earning a strong majority of support from independent shareholders (those who are not founders or controlling shareholders). As we noted ahead of the vote on the original proposal in 2022, human rights impact assessments are essential for any company that is part of the “targeting ecosystem.” This is especially true of a company like Meta, which then accounted for more than a quarter of all U.S. digital ad spending.

The Meta proposal, which RDR helped prepare, was filed by Mercy Investments and co-filed by NEI Investments. “I don’t think I could have done it without RDR,” Lydia, who was the lead filer, explained. For her, a lot of RDR’s value-added has come from “tracking the legislation in the U.S., the EU, in Japan. I don’t have the capacity to do that. I don’t know other organizations that are particularly good at that.” She uses RDR’s data to track regulatory risks to companies like Meta for exempt solicitations (filings in which shareholders can make a longer case for their resolution and respond to company opposition), as well as to present these regulatory risks to investment giants like BlackRock and Vanguard, in the hope of attracting their large trove of investor votes.

Lydia recognizes the impact of the Meta proposal, which received “the second highest support apart from dual-class share voting.” She has noted, however, that, as long as multi-class share structures remain, “we’re never going to go anywhere.” These share structures give company founders inflated voting power at annual general meetings and play a big role in limiting the success of human rights-based proposals. At Meta, CEO Mark Zuckerberg holds 61% of voting power, meaning he can single-handedly vote down any proposal. For this reason, Lydia is in “favor of investors really examining strategies to focus on a single issue, which is eliminating the dual-class share structure.”

And this is why RDR, alongside its support for individual proposals, has also been at the forefront of efforts to break down dual-class share structures. In 2022, RDR sent a letter to the U.S. Securities and Exchange Commission (SEC), signed by 20 other human and civil rights groups, urging an end to such structures, while pushing lawmakers to take action. Moving forward, RDR will continue to support shareholders in crafting proposals that put human rights at the forefront of company policy and practice while also advocating for governance structures that ensure investors are finally given a fair voice at the table.


When RDR’s first Corporate Accountability Index was released in 2015, grading 16 tech and telecom companies on their respect for privacy and freedom of expression, it was the first of its kind to rank companies specifically on their digital rights impact. Unsurprisingly, it took the policy world by storm, including at international forums like the UN. Several U.S. media outlets also took an interest in what the Index revealed about the activities of large global platforms and telcos like Google and AT&T. In the Majority World, meanwhile, many civil society organizations were taking note of the potential for these standards to hold local and subsidiary companies accountable closer to home.

The first adaptations of RDR’s methodology to these local contexts began in 2018, when the Arab region digital rights organization SMEX used it in a report on the state of digital rights for mobile users within Arab states. That same year, an adaptation examining mobile operators in Russia was conducted, while Internet Without Borders produced a report on the performance of large international telecom subsidiaries across Africa. Many adaptations since have been carried out in countries or localities with little existing corporate accountability, particularly in the tech sector, and often in precarious socio-political contexts.

In response to the enormous potential of this work, RDR began providing direct support to organizations worldwide in 2021. As Leandro Ucciferri, RDR’s Global Partnerships Manager, put it, “these projects are putting new companies under the spotlight, which have not received enough scrutiny from the digital rights community.” Since this work began, we’ve supported adaptations covering 35 countries and 127 companies. Though adaptations have taken place under various grants, many recent projects have been conducted under the auspices of the Greater Internet Freedom (GIF) project. These include recent, successful projects across Africa and in Central Asia, both of which we highlight below. (Another report was recently released covering Southeast Europe.)

First Evaluations of the Central Asian and African ICT Sectors: Major Gaps in Human Rights

Internet shutdowns, executed through the telecom sector, are rampant in both of these regions. As we’ve recently detailed, politically motivated shutdowns have occurred frequently in Central Asia in recent years. In Kazakhstan, shutdowns were executed in 2022, after the outbreak of mass protests, following on the heels of previous shutdowns in 2020 and 2021. In addition, in both Uzbekistan and Kyrgyzstan, laws aimed at imposing online censorship have been introduced. In Africa, shutdowns have recently been weaponized in Tanzania and Zimbabwe, among other countries.

It is within this context of poor digital rights protections that RDR recently partnered with GIF and local partner organizations in Central and Southern Africa, as well as in Central Asia, to establish a baseline for corporate accountability. According to Mavzuna Abdurakhmanova, GIF’s Central Asia Digital Rights Consultant, her region’s report was notable for being the first of its kind. “From the civil society perspective, no one questioned the private sector on the protection of the digital rights of their users,” she explained. “Nobody was even thinking about asking questions about human rights to the business sector.” And yet company transparency and accountability are particularly important in such countries, where weak human rights protections and fragile democracies often lead telcos and other ICT service providers to participate in infringing on users’ basic rights.

In Central Asia, the report, entitled “Ranking Digital Rights in Central Asia,” was conducted by the Tajikistan-based Civil Internet Policy Initiative (CIPI) and looked at companies across three sectors (telecom, e-commerce, and fintech) in four countries: Kazakhstan, Tajikistan, Uzbekistan, and Kyrgyzstan.

A growing number of GIF reports covering Central, East, and Southern Africa have been released recently. In 2022, RDR supported local partner Paradigm Initiative (PIN) in the creation of its report “Ranking Digital Rights in Angola, Democratic Republic of Congo and Central African Republic,” as well as Internet Freedom Lesotho’s report on “Digital Rights in Lesotho.” Paradigm Initiative focused on three telecommunications companies, one for each country covered. The Lesotho report, meanwhile, covered four companies: two telcos and two financial companies. Building on these successes, a new report examines the policies of top telecom operators in four East and Southern African countries: Uganda, Tanzania, Zimbabwe, and Zambia.

The findings in these reports have been stark. Researchers discovered that companies routinely point to government requests to excuse the high number of internet shutdowns they comply with. When she began working on this project, Wakesho Kililo, who leads GIF’s Africa work, wondered whether companies would actually have any policies in place to handle such demands. Unsurprisingly, she found that such policies were frequently missing. Transparency about actions taken in response to censorship demands was also limited. Mavzuna recalled that almost all companies evaluated in the Central Asia report were responding to government requests and handing over users’ personal information, yet none had a publicly available policy on how they respond to such requests, nor did they publish data on the number of requests received or complied with.

Across the board, the gaps were clear and gaping. According to Mavzuna, none of the companies covered by the research publish annual reports on their websites, and none publish information about their governance structures. Wakesho, meanwhile, noted that Terms of Use are rarely comprehensive. In fact, her research across Africa showed that a majority of companies there fail to translate their Terms of Use into local languages. In addition, many of the companies Wakesho helped evaluate have privacy policies that are sorely lacking, and sometimes non-existent. This was true, for example, of NetOne Zimbabwe, which has no privacy policy for any of the services investigated for the report. Wakesho remembered thinking, as she completed her research, that “users’ digital rights are being abused. They’re not being protected, either at all or at the rate they should be.” She added, “When a telco doesn’t even have a privacy policy, what are they doing with user data? We don’t know.”

Another pattern of note has appeared across RDR-supported adaptations, and it was evident in both of these regions: parent companies based in Western Europe often offer more robust human rights policies to their customers at home than to the users of their subsidiaries abroad. GIF Central Asia’s Mavzuna explained that, when her team conducted a comparative data analysis, many good policies and practices of the parent companies were notably absent from those of their subsidiaries. “Why didn’t you take those good policies and good practices from European parent companies and adapt them to our local context?” Mavzuna wondered. (For more on this discrepancy, see our essay from the 2022 Telco Giants Scorecard, “The disconnect between HQ and local subsidiaries results in less transparency and protection.”)

A First Experience with Company Engagement: Is Change Really Possible?

Following the completion of their reports, Mavzuna and Wakesho were determined to engage with companies, as RDR does following the release of our Scorecards, in the hope of influencing new company policies. In most cases, the largest telecom companies simply ignored them. However, smaller companies (and a couple of larger telcos) expressed interest in improving their policies based on the findings of the reports.

For example, Mavzuna was able to attend a meeting with SMEs (small and medium-sized enterprises) in Tajikistan, where the Central Asia report findings were presented for the first time. There, a representative of Alif Bank, a fintech company, expressed appreciation for the report and the important gaps it uncovered, which he promised to address. Though Mavzuna didn’t engage directly in Uzbekistan, her local partner there reported a positive reception; many within that country’s private sector are aware of the positive correlation between improved respect for human rights and investment from Western countries.

Meanwhile, in Lesotho, the report’s researcher, Nthabiseng Pule, was able to meet directly with representatives from Vodacom Lesotho and Vodafone. Not only that, but Vodacom Lesotho also heeded some of the report’s top recommendations. This included hiring a language expert to translate the company’s terms and conditions into Sesotho, Lesotho’s main language. The company also agreed to create a privacy portal, where users can find all information relating to their privacy. This represented an important win and first step for the organization. Paradigm Initiative was also able to meet with France-based Orange, a telco with subsidiaries across the African continent. Finally, GIF, with support from the Global Network Initiative, was able to meet with Vodacom Tanzania.

Engagement also extended beyond companies. In Lesotho, a policy brief noting the report’s main findings was sent to the country’s regulator, the Lesotho Communications Authority. The report also opened many eyes in civil society: one organization, Internet Society Lesotho, launched a campaign around the right to privacy based on the report. Wakesho also highlighted engagement at international forums, including the Forum on Internet Freedom in Africa (FIFAfrica), the Digital Rights and Inclusion Forum, and RightsCon 2023, where she shared the reports’ findings with civil society and with company representatives, some of whom even requested that their companies be evaluated in the future.

According to Mavzuna, she and her colleagues are now feeling “more confident” after receiving positive feedback from the private sector. She hopes to continue engaging with companies interested in implementing more robust human rights protections and believes this is just the beginning for this kind of engagement. Although larger companies were unwilling to engage during this first round, she maintains hope based on the interest she has seen from smaller companies, including in the region’s nascent e-commerce sector, where companies are mostly younger and more dynamic.

Despite these initial successes, Wakesho believes that “we need more of this work. There’s more to be done. We need the regulators to be aware… We need to call the companies to account.” Both Wakesho and Mavzuna believe this must eventually involve getting the larger telcos to the table and holding them to the same level of human rights protections they offer users in their home countries. And RDR hopes to continue supporting them in that task.

As RDR’s Leandro Ucciferri has explained, “By making it easier to use our methods and standards, we aim to grow the community of corporate accountability advocates that are bringing these conversations to new countries and regions outside the Silicon Valley and Brussels bubbles.” Indeed, we’ve already doubled down on our commitment to support this movement globally, including through last year’s launch of the RDR Research Lab, a hub for digital rights researchers and experts across the world. This, however, is just one part of our effort to fully democratize the global tech accountability movement, as we continue to help allies around the world hold the ICT sector to account for the rights of all users, everywhere.