With the release of her pivotal Consent of the Networked in 2012, RDR Founder Rebecca MacKinnon issued a call to action for civil society to defend our digital rights from companies with the same rigor with which we’ve previously fought for our rights before governments. In 2013, Rebecca answered the call directly by launching Ranking Digital Rights based on the idea that there needed to be a standard against which companies would be measured for adherence to human rights.

Though RDR first encountered strong resistance from companies, today, through sustained engagement, it has helped create a “race to the top,” propelling Big Tech and telco giants toward improved policies that better protect human rights in the digital sphere, from stronger privacy protections to companies’ first-ever transparency reports. Since 2020, RDR’s data has been cited with growing frequency in shareholder proposals, at companies from Meta to Alphabet, put forward by the responsible investor community to hold the tech sector accountable.

While RDR has seen its influence grow among companies and investors, its methodology has also been employed with increasing frequency to hold smaller tech companies, as well as subsidiaries of the companies evaluated by our Scorecards, accountable to marginalized populations and to citizens living in complex socio-political conditions, from Iran to Russia and from Southeast Europe to South and Central Asia. In the U.S., it has been taken up by the LGBTQ advocacy organization GLAAD.

To mark a decade of milestones, you can now browse our Decade of Tech Accountability in Action page. Here’s what you’ll find:

Our Top 5 Accomplishments Over 10 Years: With over a decade of work, there’s a lot to choose from. So what are we at RDR most proud of? Take a look back and see what made the cut.

RDR Looks to the Future: To mark this important occasion, RDR held its first-ever in-person retreat this spring. Check out our strategic priorities as we step into our second decade.

A Conversation with RDR Founder Rebecca MacKinnon: From RDR’s early days to her proudest RDR moments to her current work with the Wikimedia Foundation, check out this can’t-miss conversation between Rebecca and RDR’s Research Manager Zak Rogoff.

Testimonials: RDR has worked with everyone from global civil society to investors to companies to achieve our priorities. Take a look at what our partners and stakeholders have to say about working alongside RDR.

The RDR Timeline: Take a look back at some of our most pivotal moments.

Deep Dive into 10 Years of Impact: Hear first-hand from some of the partners and actors who have had the most lasting impact on our work, and on the digital rights field at large. Check out the following 5 stories to discover how RDR works with global civil society partners, companies, policymakers, and investors to push for change:

If you’re interested in where RDR is headed, check out our call for consultation on our upcoming standards for generative AI →


RDR Issues Call for Consultation on Generative AI Accountability Standards

In the fall of 2023, Ranking Digital Rights will release the inaugural Generative AI Accountability Scorecard, a report card and roadmap for how consumer-facing generative AI services can respect the human rights to privacy, non-discrimination, freedom of expression, and freedom of information. The Scorecard will rely on this set of draft indicators as a basis for scoring.

For more information on this project, including the rationale for its creation, please check out our June 2023 report published alongside RDR’s set of preliminary standards on generative AI, which formed the basis for these draft indicators.

Before we transform this draft into final indicators and conduct our evaluation, we’re looking for input from civil society experts and partners. If you’re interested in providing feedback, please email us at methodology@rankingdigitalrights.org.

You may also contact our Research Manager Zak Rogoff at rogoff@rankingdigitalrights.org.

For more info, please check out our full draft indicators.
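For readers unfamiliar with how indicator-based scorecards turn policy evaluations into numbers, here is a minimal sketch in Python. It assumes the element-level scheme used in past RDR Scorecards (full, partial, or no credit per element, averaged upward); the indicator names and evaluations are hypothetical, and the final generative AI methodology may weight or aggregate differently.

```python
# Minimal sketch of indicator-based scoring. Assumes the element-level
# scheme used in past RDR Scorecards (full / partial / no credit,
# averaged upward); indicator names and evaluations here are hypothetical.

ELEMENT_SCORES = {"full": 100, "partial": 50, "none": 0}

def indicator_score(elements):
    """Average the element-level scores within one indicator."""
    return sum(ELEMENT_SCORES[e] for e in elements) / len(elements)

def category_score(indicators):
    """Average the indicator scores within one category (e.g., privacy)."""
    return sum(indicator_score(elems) for elems in indicators.values()) / len(indicators)

# Hypothetical evaluation of one generative AI service on two privacy indicators.
privacy = {
    "P1: discloses what user data it collects":     ["full", "partial", "none"],
    "P2: discloses how long user data is retained": ["partial", "none"],
}

print(f"Privacy score: {category_score(privacy):.1f} / 100")  # 37.5 / 100
```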


Amid Censorship and Shutdowns, RDR’s Methodology Is Spotlighting the Need for Transparency in Central Asia

In January 2022, protests broke out in Kazakhstan over rising gas prices. Protestors, whom the government accused of attempting a coup, were met with violent repression by law enforcement authorities and armed groups. Access to the internet was also quickly restricted, as it had been previously in both 2020 and 2021. A similar tendency toward restricting online freedoms has been noted in Uzbekistan and Kyrgyzstan.

It is within this context that RDR supported the Central Asian digital rights organization Public Fund Civil Internet Policy Initiative (CIPI) in researching whether and how tech companies in the region—including telcos, e-commerce platforms, and fintechs—have committed to respecting human rights and protecting their users.

CIPI’s research points to several gaps in company policies. Perhaps the most glaring omission is companies’ failure to provide transparency about their policies for responding to government requests for internet shutdowns, despite the imminent threat shutdowns pose to human rights across the region. The research also highlights a trend RDR has previously reported on: the tendency of large telecommunications companies, often based in Western Europe, to provide fewer human rights protections to the users of their subsidiaries abroad. This is a gap that must be bridged immediately.

Read more about the impact of these companies on human rights in Central Asia, and how CIPI is working to improve human rights policies in the region. →

For even more info, check out the full report. →


Is Momentum on Tech Shareholder Activism Stalling? How to Reinvigorate It in 2024

Though support for investor proposals at AGMs (annual general meetings) was down overall this year, the number of human rights-based proposals was up by 14% since 2020. This is just one of many emerging trends that the responsible investor community should take note of as we work to improve support during next year’s round of meetings.

In a recent post, we highlighted a number of patterns that may have impacted support in 2023. These include:

  • Ever-present barriers like the dual-class share systems at companies like Meta and Alphabet continue to give founders extremely inflated voting power and play a role in limiting proposals’ success.
  • After last year’s period of uncertainty in the tech sector, shareholders may also be refocusing their attention on proposals where risks can be more clearly linked to companies’ business outcomes and materiality.
  • While the number of overall proposals has increased, the number of specifically anti-ESG proposals has skyrocketed. Where they were almost nonexistent before 2020, over 50 were filed in 2023. In some cases, proposals put forward by “anti-ESG” groups may have created confusion by employing similar language to those that call for politically agnostic disclosures on human rights issues.
  • Large institutional investors often rely on analysis from ESG ratings agencies to inform voting decisions. As we discussed in a recent piece, these ratings often lack the grounding in international human rights standards that the RDR Corporate Accountability Index and other human rights benchmarks offer.

In the coming weeks, we’ll be joining forces with the Investor Alliance for Human Rights’s Anita Dorett to take a deeper look into some of the results at this year’s meetings and share how investors and our partners can learn from the trends of the past two years to create stronger and more effective shareholder advocacy at Big Tech companies in 2024.

Read more on this and on the specifics of RDR-supported proposals at Meta, Alphabet (Google), and Amazon. →


RDR Media Hits

NPR: RDR’s Research Manager Zak Rogoff spoke to NPR about shareholder advocacy ahead of Big Tech annual general meetings.

Read More at NPR

 


Support Ranking Digital Rights!

If you’re reading this, you probably know all too well how tech companies wield unprecedented power in the digital age. RDR helps hold them accountable for their obligations to protect and respect their users’ rights.

As a nonprofit initiative that receives no corporate funding, we need your support. Do your part to help keep tech power in check and make a donation. Thank you!

Donate

 

Subscribe to get your own copy.


Since 2018, organizations across various socio-political contexts have adapted RDR’s methodology to hold platforms and other ICT services accountable. What do they all have in common? Protecting the rights of some of the globe’s most marginalized and discriminated-against populations. Two of the most successful and notable adaptations are: measuring how well social media companies protect LGBTQ rights online and evaluating messaging apps under conditions of state repression in Iran.

The former was conducted by the American LGBTQ media advocacy organization GLAAD to create its now-annual Social Media Safety Index (SMSI), which includes a Platform Scorecard evaluating five top platforms (Facebook, Instagram, Twitter, YouTube, and TikTok) on how well they protect LGBTQ users. Among the main issues Jenni Olson, Senior Director of the Social Media Safety Program, identifies as harming LGBTQ users are inadequate content moderation and enforcement, harmful algorithms, and a lack of transparency. The SMSI sheds light on how hate speech, misinformation, and conspiracy theories targeting LGBTQ people are able to spread unchecked online. The SMSI has made headlines each year, but the 2023 edition stirred the most controversy, due to the large drop in score experienced by Twitter. The company, now under the helm of Elon Musk, fell 12 points (all other platforms saw their scores increase) and became the most dangerous company for LGBTQ people this year.

The second adaptation was used to create the report “Digital Rights & Technology Sector Accountability in Iran,” a joint collaboration between Filterwatch and Taraaz examining both local and international messaging apps. When Roya Pakzad, Taraaz’s Founder and Director, last spoke to us about her work on the report, protests were ongoing over the death of 22-year-old Mahsa Amini, who died in police custody after being arrested for the “improper” wearing of her hijab. The mass mobilization that followed was sparked over social media and met with internet shutdowns and outages. Meanwhile, the government was being accused of pushing through the “draconian” Internet User Protection Bill, which would vastly curtail what Iranians can access on the web.

These two conversations with Jenni and Roya were first published in 2022, as part of our interview series, Digital Rights Dialogues. Among other things, we spoke to them about why they chose to use the RDR methodology, how they adapted it, and how it is helping them achieve their goals:

Jenni Olson: We had been thinking about doing a scorecard and trying to decide how to go about that. We knew that we wanted to lean on someone with greater expertise. We looked to Ranking Digital Rights as an organization that is so well respected in the field. We wanted to do things in a rigorous way. We connected with RDR and you guys were so generous and amenable about partnering. RDR then connected us with Goodwin Simon Strategic Research, with Andrea Hackl [a former research analyst with RDR] as the lead research analyst for the project. That was such an amazing process and, yes, a lot of work. With Andrea, we went about developing the 12 unique LGBT-specific indicators and then Andrea attended some meetings with leaders at the intersection of LGBT, tech, and platform accountability and honed those indicators a little more and then dug into the research. For our purposes, the scorecard seemed like a really powerful way to illustrate the issues with the platforms and have them measured in a quantifiable way.

Though it’s called the “Social Media Safety Index,” we’re looking not only at safety, but also at privacy and freedom of expression. We developed our indicators by looking at a couple of buckets. The first being hate and harassment policies: Are LGBTQ users protected from hate and harassment? The second area was around privacy, including data privacy. What user controls around data privacy are in place? How are we being targeted with advertising or algorithms? Then the last bucket would be self-expression in terms of how we are, at times, disproportionately censored. Finally, there is also an indicator around user pronouns: Is there a unique pronoun field? Due to lack of transparency, we can’t objectively measure enforcement.

What we end up hearing the most about is hate speech, but it’s important to note that LGBTQ people are also disproportionately impacted by censorship. We’re not telling the platforms to take everything down. We’re simply asking them to enforce the rules they already have in place to protect LGBTQ people from hate. 

I’m not naïve enough to believe that the companies are just going to read our recommendations and say “Oh wow, thank you, we had no idea, we’ll get right on that, problem solved, we’re all going home.” This kind of work is what GLAAD has done since 1985: create public awareness and public pressure and maintain this public awareness and call attention to how these companies need to do better.

There are times when it feels so bad and feels so despairing like, “Oh, we had this little tiny victory but everything else feels like such a disaster.” But then I remind myself: This is why this work is so important. We do have small achievements and we have to imagine what it would be like, how much worse things would be, if we weren’t doing the work. I’m not naïve that this is going to create solutions in simple ways. It is a multifaceted strategy and, as I mentioned a minute ago, it is also really important that we’re working in coalition with so many other civil society groups, including with Ranking Digital Rights. It’s about creating visibility, creating accountability, and creating tools and data out of this that other organizations and entities can use. A lot of people have said, “We’re using your report, it’s valuable to our work.”

Roya Pakzad: For a long time now, the Iranian digital rights ecosystem has been Iranian people resisting government censorship and the Iranian government trying to censor the internet. If you read literature from 2008 until 2016, you see that civil society wasn’t really focusing on the role of companies in their digital rights advocacy. The focus [of the literature] was mainly on government censorship. So we wanted to say, “Oh no, there are so many actors in the middle, and we have to focus on them because they have a responsibility too.” Part of that was just introducing the idea of corporate social responsibilities.

We wanted to introduce GNI (Global Network Initiative), a non-governmental organization that assists companies in respecting freedom of expression and privacy rights when faced with pressure from governments to hand over user data or remove or restrict content, into the conversation. We wanted to introduce multi-stakeholder engagement. We wanted to introduce human rights impact assessments. We wanted to introduce Ranking Digital Rights’s great index and show that you can use that for evaluating yourself as a company, or journalists can use it to evaluate you. That’s why we didn’t just pick certain indicators [from Ranking Digital Rights’s methodology]; we used all of them, because the main purpose was an educational approach with the idea of business and human rights, introducing all of the ideas of human rights due diligence and human rights impact assessment policies.

We did have to adapt for the context of Iran and its current lack of discussion about business and human rights. And the other thing we noted is e-government services being an add-on to other services. The government incentive to use Iranian messaging apps also means you pay less than you do to use Telegram or WhatsApp, because the data that you pay for costs less than data to access foreign apps. If you don’t have enough money to pay for VPNs, it means that you can only use Iranian apps, which penalizes people because of their socio-economic status, as the government changes the tariff for data, for example. In the context of Iran, we had to pay attention to the narrative that we use and explain why we are using privacy and freedom of expression indicators and mixing them with a discussion of the socio-economic context.

[A colleague] and I also recently worked with the Iran Academia’s MOOCs program to record a lecture based on our RDR report. We have seen a lot of attention directed at the role of technology companies and technologists in digital rights in Iran. The gap that we saw back in 2017, with regards to the lack of attention to the private sector, has been shrinking dramatically in just a year. We have seen so much mobilizing, dialogue, and resistance from the tech ecosystem in Iran against government policy, like tech companies putting up banners on their websites publicly announcing their objection to the [Internet User Protection bill]. There have also been cases of naming and shaming public-private partnerships and contracts.

Companies have told us, informally and through back channels, that they are interested in using the workbook [we produced with the report] to revise their policies and update them. A non-ranked company even asked me to give a talk in their forums and for their employees (which I decided not to do, because I was worried about getting them in trouble). We have seen ICT journalists inside the country using approaches from the RDR Index to compare company policies.

Company engagement is something that we learned a lot from. The companies that we evaluated completely ignored us, to be honest with you. Sometimes we saw that some people from the evaluated company added us on LinkedIn. So we knew that they read the report, but they didn’t engage, even though we contacted them over email, we sent Twitter messages, we sent LinkedIn messages.

But non-evaluated companies, such as marketplace apps, said, “Oh we want to update our policies and we will use the workbook.” Because they were not evaluated they were like, ‘Okay, we are safe.’ They interacted with us and with journalists and students in tech policy; they were interested.

If you are interested in adapting the RDR methodology to achieve your corporate accountability goals in the tech sector, please get in touch at partnerships@rankingdigitalrights.org or visit our Research Lab for more information.

Zak Rogoff: Can you start by giving me your observations of what RDR is like now compared to the beginning, when you founded it?

Rebecca MacKinnon: In 2013, it was just an idea getting off the ground. We didn’t actually have an index of any kind until 2015. And in 2013, I was the only full-time employee. In the first half of 2013, I had a collaborative partnership going on with the University of Pennsylvania and a bit of funding to support some interns and fellows, as well as Allon Bar, whom I hired on contract in late 2013. So it was a very shoestring operation. We kind of cobbled it together with band-aids and paper clips and scotch tape.

We just had the idea that there needed to be a standard against which companies should be evaluated for respecting human rights. It needed to be something that made sense to investors. It needed to be something that resembled rankings and ratings of companies on other issues and in other industries. It needed to learn from what others had done in terms of what was effective and what was not.

We didn’t have the funds to actually produce the ranking for the first couple years. So we took our time doing a lot of research and consultation and producing iterative drafts of criteria before it actually became a methodology. So, in the summer of 2013, I was working with Tim Libert and Hae-in Lim, some students, and also with some researchers who were funded by Internews, to just take some draft criteria and test them out by looking at companies in some different countries and regions to figure out which criteria even were measurable or made any sense to evaluate, and which types of criteria didn’t work. Just to get some basic understanding of what made sense to include if the purpose was to incentivize company improvement and not just create sensational reports about outrageous things.

We learned a lot, but we also faced some resistance. A number of companies have now pointed to their results in the Index, and to their improvement, and to RDR’s work as being useful and constructive. But in 2013, some of the same companies were not happy about the idea of a public ranking when they learned about it, and tried to convince me it was not a good idea.

ZR: That’s a great story. And now they’re using it.

RM: At the time, a number of companies found this idea to be quite threatening. And today some of those same companies are using the Index and the methodology internally and, at least privately, if not publicly, acknowledge that it is very helpful.

ZR: Usually, when I explain to people why I think what we do makes a difference, the thing that I start with is that there are companies that actually talk to us and they say, “Look, we made this change because you asked us to do it.” It’s satisfying to know that some of those who use it now felt differently back then because obviously, as you know, there are still companies that won’t give us the time of day. But clearly that can change.

RM: Yeah, I mean I really knew that we were succeeding when a company that wasn’t in the Index approached me and asked if they could be in the Index. We only had so many resources for so many companies that we had to prioritize. We just weren’t able to include them. 

ZR: Tell me, what was your proudest moment working at RDR? 

RM: Well, there were a lot of proud moments so it’s hard to pick one. When investors started citing our data in shareholder resolutions, that was a very proud moment. When we saw Apple actually making changes in response to the shareholder resolution that cited our data, that was a very proud moment.

Another very proud moment was seeing civil society and research groups around the world using the methodology to hold industry in their regions accountable. When SMEX, in the Middle East/North Africa region, applied our methodology to telcos in that region and showed just how little was being disclosed and just how poor the policies were, that was a very proud moment. It showed that the methodology can be used in a lot of different ways. It’s not just about people in a privileged country applying criteria to people and to companies in other parts of the world. But it’s people in their own regions able to use these criteria to empower themselves to advocate with companies in their region to protect their rights better. That was a really proud moment, when we were able to see how the methodology was being picked up by people in lots of different parts of the world.

ZR: I feel like that’s one of the parts of the work that’s grown the most exponentially. I can’t even keep track of how many pots we have in the fire with different people doing adaptations of our methodology. 

RM: What’s made me most proud, especially since I’ve left and seen more adaptations coming out, is just knowing that the impact we’ve made through this methodology and the work we did together, and the work you all have done together since, lives on no matter what. The methodology, the way of thinking about human rights, digital rights, and corporate accountability – RDR has left an indelible mark that’s going to continue to evolve through all kinds of research. It’s had a huge impact on how investors think about these issues. And so, no matter how RDR evolves from here, it’s going to live on in really interesting ways. The impact is going to continue to spread in ways that will be hard to predict.

And one thing that I really like is that it’s not centralized. We don’t control the methodology, we don’t control how it’s used. On one hand that might seem scary, because who knows who might do what in what strange way. But on the other hand, the fact that we don’t control it in a centralized fashion and that we’re not trying to control the IP, we’re not trying to license the use of the methodology, means that it can’t die. People are free to build off of it in different directions, in ways that empower different groups of people to use the leverage that they have available, to bring about change by companies. 

ZR: My last question on this topic is, what do you think is unique about it to this day? Obviously, when you started it, it was completely unique. There was nothing like it. But there are more people looking at these issues than there were back then. So what do you see as unique about it, especially now that you have some distance from it? 

RM: I guess one thing that’s unique about it, is that it represents the input of a lot of people from many different fields, from many different parts of the world. We were not just a group of experts who sat down and figured out a methodology. We had a few ideas, but we workshopped them with people from a range of different countries and regions with a range of different types of expertise. We then took that and workshopped it again, talked to companies about it and got a lot of company input in addition to input from other stakeholders, then revised it again. We did a test run, then improved it again for the first Index, then learned the lessons from the first Index, and improved it again.

And so because of its iterative and consultative nature, it doesn’t constitute any one person’s ideas or agendas. It’s a high bar, but it’s a reasonable bar. It’s achievable and it reflects a broad consensus of what a rights-respecting company ought to be doing.

ZR: Tell me what you’re working on right now. I know you’ve been doing a lot of Section 230 related work?

RM: So at the Wikimedia Foundation, the job of my team, who are responsible for public policy advocacy, is to advocate for laws, regulations, and government behaviors that make it possible for people to edit Wikipedia – no matter who they are and where they are, without fear of being threatened or censored or sued and so on. In the U.S., Section 230 is an existential thing for us because not only does it protect us from liability for what people post on Wikipedia, but it protects the right of volunteer editors to actually enforce rules that are context-appropriate without being sued and without us being sued. And so the entire model depends on Section 230.

Whether any reforms to Section 230 would be okay, you’d really have to red-team it, and in great detail, to see what unintended consequences could arise for Wikipedia or other public interest platforms that do not rely on targeted advertising business models. Most Section 230 reform proposals focus on the large commercial platforms, and their authors aren’t thinking about the implications for non-profit or decentralized, community-run platforms like ours.

ZR: You can’t say, “Oh, it needs to stay exactly the same,” but it also ultimately has to be a tested, well-considered change if it is changed.

RM: Exactly. So people ask, “Well, could you accept any reform?” And it’s hard to say in general terms.

Wikipedia is now subject to the EU’s Digital Services Act as a Very Large Online Platform, which means that the Wikimedia Foundation (as Wikipedia’s technical and legal host) is required to do risk assessment and transparency reporting. Gee, sounds familiar (these are key parts of RDR’s standards). The DSA also requires grievance and remedy mechanisms. Again, sounds familiar. I think there could have been some RDR influence on what the DSA requires. We were already doing these things, so it’s mainly about strengthening our practices and making sure we are communicating them clearly, and addressing risks in the ways European regulators require.

Another important thing about the DSA is that we (Wikimedia) were in dialogue with European lawmakers throughout the drafting process. The final scope of the law took Wikimedia’s volunteer-run content moderation model into account: it only applies to content moderation rules set and enforced by the platform operators, not volunteer communities.

But I’m not convinced that U.S. lawmakers should consider DSA-style requirements in the context of Section 230. Of course, having a privacy law would help deal with a bunch of the issues that people are trying to solve with Section 230. (RDR’s analysis of targeted advertising and algorithmic systems reached a similar conclusion in 2020.) So we are saying to Congress: why don’t you all try the other things first before you mess with Section 230? But mainly, we’re just asking, “If you do want to revise Section 230 in a way that doesn’t hurt Wikipedia, then you need to bring us into the room when you’re drafting anything.”

ZR: I always use Wikipedia as the first thing I refer to when people ask: Why shouldn’t we change Section 230 or why shouldn’t we break it? It’s not a niche thing, everybody uses it. 

Well, this was great. Thank you and it was good to talk to you.

RM: Well, congratulations and I look forward to following what happens next.


In 2018, the Cambridge Analytica scandal helped propel the perils of surveillance capitalism into the mainstream. The following year, the release of Shoshana Zuboff’s pivotal book, The Age of Surveillance Capitalism, cemented the issues of data privacy and targeted advertising as top problems of our time, not just for a bevy of experts but for the public at large. It was in this context that Ranking Digital Rights released its first major report, It’s the Business Model. The report argued that the rampant misinformation and hate speech we were seeing perpetuated by social media companies was not the sole product of a lack of content moderation, and therefore could not be addressed through intermediary liability reform (in other words, by getting rid of Section 230).

Rather, the report argued that the pathologies of the online environment were the downstream result (a negative externality, in economic terms) of the incentives created by the industry’s targeted-advertising business model: collect and monetize all data, automate everything, scale up, and wait for the profits to roll in.

The report was influenced by recent changes that RDR had made to its methodology, as the consensus around these trends and their pervasiveness in the industry began to solidify. These changes included the addition of new indicators on algorithms and targeted advertising. As the report’s lead author, and Ranking Digital Rights’s former Policy Director, Nathalie Maréchal recalls, “the Big Tech business models had all kind of started to converge toward the collection and monetization of data, either for the purpose of advertising or for the purpose of AI development.” For these companies, the acquisition of data became both “a business imperative, and also an ideological imperative.” 

This was different from how things were back when the methodology for the first RDR Corporate Accountability Index was conceived in 2013. At the time, most of RDR’s indicators evaluated either “things that companies were doing at the behest of governments or things that basically amount to negligence [for example, poor data security].” But, since then, it had become clear, both to Maréchal and to RDR Founder Rebecca MacKinnon, that companies also made a lot of decisions based purely on their own self-interest. Meanwhile, Nathalie found herself fed up with the reigning policy discourse in D.C. and Brussels at the time, which gave the impression that “the only thing wrong with social media is that CEOs are insufficiently motivated to do content moderation correctly.”

Sara Collins, Senior Policy Counsel at Public Knowledge, agrees. For a while, most D.C. policy discussions about how platforms “may spread misinformation and threaten democracy” would, reflexively, also become discussions about “how to get rid of Section 230.” As she explains, the report helped “thread the needle about why [data collection] has residual content harms.” This is especially important for organizations like Public Knowledge, which places a strong emphasis on free speech online.

Nathalie recalls a metaphor MacKinnon shared with her at the time: Performing only content moderation is like trying to remove pollutants in a stream using only a pipette. It was clearly going to be insufficient to clean up the polluted lake that was the vast networks of disinformation across platforms. The It’s the Business Model report was conceived of as part of a necessary narrative shift and drew on RDR’s new indicators to strengthen the connection between the business model and the harms RDR was observing directly through its company evaluations and close relationships with global civil society organizations.

Changing the Conversation: The Report Comes Out

Unfortunately, the report’s release event was planned for March 17, 2020, days after the COVID-19 pandemic was declared, and the launch had to be canceled. Even so, the report had important implications across the policy sphere, with a number of allies reporting decisive effects of the report on their thinking about the business model.

Jesse Lehrich, Co-Founder of Accountable Tech, explained that “the It’s the Business Model report was really critical and ahead of its time as far as moving the advocacy community and policymakers to think beyond content moderation and deplatforming.” He describes the report as “formative” in shaping a lot of his organization’s work and, in particular, their “Ban Surveillance Advertising” campaign, which brought together over 50 organizations around the globe. A focus on the surveillance advertising business model has also served as a way to “break down silos” between different parts of the advocacy community, Jesse points out. 

Privacy advocates, civil rights groups, and anti-monopoly activists are sometimes at odds, but the business model was something they could all coalesce around. And this remains true as the community begins to grapple with the potential impacts of AI. In fact, the AI Now Institute, in a recent report, referenced Accountable Tech’s campaign to ban surveillance advertising as an important model. Though the arguments in RDR’s report spoke most clearly, at the time, to surveillance advertising, Nathalie Maréchal agrees that “today we see the same cold logic applied in the world of artificial intelligence and automated decision-making.”

Meanwhile, Jesse believes the report also played an important role in galvanizing legislation and regulatory frameworks that have come about since. He points, for example, to the inclusion of bans on targeted advertising of children and on the use of sensitive data for targeting in the EU’s Digital Services Act (DSA) as the kind of regulatory response that was made possible thanks, in part, to RDR’s work. Sara Collins agrees, noting that, “I do really think that [the report] has shaped how people are talking about the content space. You still obviously get the Section 230 bills, but now that’s not the only solution put forward.”

At the time of the report’s release, Anna Lenhart was working on tech oversight for Representative David Cicilline, in the House Judiciary Committee. One prime area of focus for her was ad targeting and ad libraries, and understanding what kinds of information are useful for measuring discrimination in ad targeting. By 2021, Anna was advising on a number of potential bills that took aim at the surveillance advertising industry, including Congresswoman Lori Trahan’s Social Media Data Act. In particular, the bill would mandate that companies keep thorough ad libraries to help bring transparency to ad targeting. One thing Anna was looking for while conducting her research was reports that provided examples of potentially problematic advertising and ad campaigns. And this was something she found in the Business Model report. “It’s always really helpful [to have examples] when you’re trying to tell the story to constituents or briefing members of Congress,” she explains.

That year, Nathalie was called upon to testify on the Hill, and Anna requested her expertise during several meetings while the Congresswoman and her staff worked to craft the bill. Notably, Anna’s former boss, Congressman David Cicilline, also made several references to the “business model” during an antitrust hearing, while grilling leaders of the major Big Tech companies. Finally, at the international level, UN Special Rapporteur Irene Khan referenced the business model in an important report for the UN Human Rights Council on “disinformation and freedom of opinion and expression.”

Though the report was released at a time when other events and thinkers were helping to shift the conversation, RDR’s report played an essential role, at a pivotal moment, to help further popularize the idea of the “business model” as the real root of the growing problems of mis- and disinformation. Its release came at just the right time to help galvanize policymakers and civil society alike and to create a lasting imprint on ongoing policy conversations, conversations which have taken on new meaning and urgency with the growing AI arms race now upon us.


From its inception, Ranking Digital Rights’s standards and methodology were designed with investors in mind. Indeed, our Corporate Accountability Index was devised almost a decade ago alongside ESG ratings provider Sustainalytics. Since then, RDR has aimed to ensure our standards would be usable for responsible investors interested in tackling growing concerns around the regulatory and human rights risks linked to Big Tech. As RDR Founder Rebecca MacKinnon mentioned in our inaugural Investors Research Note in 2017, though “digital rights issues [had] been hiding in plain sight for more than a decade,” the complexity of the issues involved had made “it hard for many investors to recognize the potential significance of specific abuses or to track evolving performance standards.” These sentiments were recently echoed by former RDR Investment Engagement Manager Jan Rydzak, who explained that benchmarks like RDR continue, today, to “highlight companies’ impact on rights that have often been neglected by existing ESG frameworks.”

In the investor community, “there’s this traditional view of what human rights are and what impacts human rights, including supply chain issues and worker safety issues,” Lydia Kuykendal, Director of Shareholder Advocacy at Mercy Investment Services, explained. “A lot of investors, a lot of people that do our work, have those more traditional views and do not feel comfortable with any type of tech, let alone cutting edge tech,” she continued. Therefore, having the kind of support that RDR provides has been “more important than in almost any other space.” Working with organizations like RDR is also particularly useful for those in the investor community who are working across different ESG issues, as Michela Gregory, Director of ESG Services at NEI Investments, added. Much of this work has been facilitated through RDR’s close working relationship with the Investor Alliance for Human Rights (the Investor Alliance).

RDR and the Investor Alliance for Human Rights Join Forces

The Investor Alliance was formed in 2018 as an initiative of the Interfaith Center on Corporate Responsibility (ICCR) to augment the number and capacity of global investors engaged on business and human rights concerns. The Investor Alliance’s work is centered on the UN Guiding Principles on Business and Human Rights, the same set of international principles that guides RDR’s work. ICCR’s genesis in the early 1970s came in response to calls for religious and faith-based investors to divest from South Africa to pressure the government to abolish apartheid. Like her colleagues in the responsible investment space, the Investor Alliance’s Director, Anita Dorett, found that, initially, most investors, and the businesses they engaged with, had a narrow view of human rights risk, generally focused on supply chain concerns. The Investor Alliance’s decision to focus on human rights risks in the tech sector represented an important shift.

Meanwhile, RDR’s focus on digital rights and its alignment with the same international guiding principles made the two natural allies, Anita explained. In addition, she said, “we want to ensure all of our engagements are research-based and data-driven; comparative data is really important. So RDR was kind of an obvious choice for us.” RDR’s value-added was clear as soon as Anita started engaging with Founder Rebecca MacKinnon, who, she said, “poured her attention and her focus on investors utilizing the RDR data and really rallied around collaborating with us, understanding that the critical users of this data will be investors.”

Though RDR continued to speak to investor needs over the years, including through successive investor updates, it was the release of the Investor Alliance’s “Investor Statement on Corporate Accountability for Digital Rights” in 2021 that truly cemented RDR’s key role in helping to galvanize shareholder proposals around human rights concerns in the digital sphere. The statement, signed by 176 investors representing over US$9.2 trillion in investments, outlined the need for companies to adapt to “investor expectations in line with evaluations and recommendations of the 2020 Ranking Digital Rights Corporate Accountability Index,” in particular around privacy and freedom of expression.

The Investor Alliance convenes and helps coordinate the collective work of a diverse group of investors. As its Director Anita explained, for this to be successful, “you need everybody on the same page sharing a common set of investor expectations.” Therefore, “the investor statement represents the articulation of investors’ expectations, based on the data RDR provides, and using RDR’s recommendations, with RDR’s expertise and analysis, to hold companies to account or to drive companies to fill in the gaps in their digital rights performance.” The decision by signatories to align their expectations around RDR’s work didn’t come as a surprise to Mercy Investments’s Kuykendal, who added that “familiarity and trust with RDR among the investor community made it easier for many to sign onto the statement.”

The statement also represented the culmination of growing investor interest in the potential digital risks presented by the tech sector. A first iteration of it, in 2019, garnered just under 50 signatories. But by 2021, interest in tech issues had increased significantly, Anita explained. During this time, shareholder proposals had been put forward for the first time at tech companies including Apple, Amazon, and Facebook (Meta), demanding everything from human rights policies to dedicated governance structures. And these helped to further grow awareness, even among investors who voted against them. Unsurprisingly, according to Anita, today “every time we speak to a new investor, they want to talk about tech.”

Lydia Kuykendal recalled that, before 2021, Mercy had done little in the tech space; most of its growing body of work in this space has come through its affiliation with the Investor Alliance. For NEI’s Michela Gregory, the statement has served as an important launchpad for the engagement and dialogues with companies that have come since. The Investor Alliance’s Digital Rights Engagement Initiative continues to coordinate outreach by the statement’s many signatories, including NEI and Mercy Investments, to RDR’s ranked companies.

RDR Begins Supporting Key Proposals, Including at Meta

While RDR was first cited in a proxy resolution in 2020, we began directly supporting the crafting of such proposals in 2021. At Meta, for the second time running, shareholders recently voted on one of the most consequential RDR-supported proposals, calling for a human rights impact assessment of the company’s targeted-advertising policies and practices. It has been one of the most successful in the company’s history, earning a strong majority of support from independent shareholders (those who are not founders or controlling shareholders). As we noted ahead of the vote on the original proposal in 2022, human rights impact assessments are essential for any company that is part of the “targeting ecosystem.” This is especially true of a company like Meta, which then accounted for more than a quarter of all U.S. digital ad spending.

The Meta proposal, which RDR helped prepare, was filed by Mercy Investments and co-filed by NEI Investments. “I don’t think I could have done it without RDR,” Lydia, who was the lead filer, explained. For her, a lot of RDR’s value-added has come from “tracking the legislation in the U.S., the EU, in Japan. I don’t have the capacity to do that. I don’t know other organizations that are particularly good at that.” She uses RDR’s data to track regulatory risks to companies like Meta for exempt solicitations—where shareholders are able to make a longer case for their resolution, and respond to company opposition—as well as to present these regulatory risks to investment giants like BlackRock and Vanguard, in the hope of attracting their large trove of investor votes.

Lydia recognizes the impact of the Meta proposal, which received “the second highest support apart from dual-class share voting.” She has noted, however, that, as long as multi-class share structures remain, “we’re never going to go anywhere.” These share structures give company founders inflated voting power at annual general meetings, and play a big role in limiting the success of human rights-based proposals. At Meta, CEO Mark Zuckerberg holds 61% of voting power, meaning he could single-handedly vote down any proposal. For this reason, Lydia is in “favor of investors really examining strategies to focus on a single issue, which is eliminating the dual-class share structure.”
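To see the arithmetic behind that outsized voting power, here is a minimal sketch in Python with purely illustrative share counts (not Meta’s actual figures, though the ten-votes-per-Class-B-share ratio is the structure commonly described at Meta). It shows how a modest economic stake, concentrated in high-vote shares, becomes majority control.

```python
# Minimal sketch of dual-class voting arithmetic. Share counts below are
# purely illustrative (not Meta's actual figures); the 10-votes-per-share
# ratio for Class B is the structure commonly described at Meta.

def voting_power(own_a, own_b, total_a, total_b, votes_per_b=10):
    """Fraction of total votes controlled by a holder of Class A (1 vote)
    and Class B (votes_per_b votes) shares."""
    holder_votes = own_a + own_b * votes_per_b
    total_votes = total_a + total_b * votes_per_b
    return holder_votes / total_votes

# Hypothetical founder: owns 90% of the high-vote Class B shares,
# none of the much larger Class A float.
total_a, total_b = 2_200_000_000, 400_000_000
founder_b = 360_000_000

equity_stake = founder_b / (total_a + total_b)            # share of all shares
vote_share = voting_power(0, founder_b, total_a, total_b)

print(f"Economic stake: {equity_stake:.0%}")  # ~14% of shares outstanding
print(f"Voting power:   {vote_share:.0%}")    # ~58% of votes: a guaranteed veto
```

Under one-share-one-vote, that same 14% stake would carry only 14% of the votes; the dual-class structure is what converts it into an effective veto over every shareholder proposal.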

And this is why RDR, alongside its support for individual proposals, has also been at the forefront of efforts to break down dual-class share structures. In 2022, RDR sent a letter to the U.S. Securities and Exchange Commission (SEC), signed by 20 other human and civil rights groups, urging an end to such structures, while pushing lawmakers to take action. Moving forward, RDR will continue to support shareholders in crafting proposals that put human rights at the forefront of company policy and practice while also advocating for governance structures that ensure investors are finally given a fair voice at the table.