
Over the past decade, Ranking Digital Rights has established a strong reputation for holding Big Tech and telco giants accountable for upholding human rights. Many bright minds have passed through RDR’s (figurative) doors, making important contributions to finding the best possible ways to push these companies to be transparent about their digital rights policies. All of this hard work is reflected in the development, and evolution, of the RDR methodology, which allows us to accurately and effectively track companies’ commitments and their disclosures of policies meant to protect our most critical digital rights. This is the story of how the RDR standards came to be a decade ago, how they’ve evolved since to best examine and reflect company policies, and the changes still needed to protect our rights online.

Part 1: Where It All Started

2015 RDR Standards Indicators Overview

In 2013, after the launch of her pivotal 2012 book Consent of the Networked, author and internet freedom activist Rebecca MacKinnon founded RDR. The book was a powerful call to action, shedding light on how the convergence of unchecked government actions and unaccountable company practices was threatening the future of democracy and human rights globally. In response to a dearth of existing research on these trends, MacKinnon proposed a project that would rank companies and educate stakeholders with three main goals in mind: examining corporate policy; identifying which companies, if any, could be considered industry “leaders” on digital rights, and which ones needed to catch up; and, finally, setting a roadmap for all companies to improve their policies and practices through concrete, measurable steps. Over time, our standards have changed both to reflect growing knowledge of how best to accurately capture disclosed company behavior and to account for the ever-evolving tech industry; moving forward, RDR’s methodology will continue to do so.

To produce a first iteration of RDR’s draft criteria, Rebecca launched a research and consultation process through a collaborative partnership with the University of Pennsylvania. It took two years from RDR’s founding before the launch of the first official RDR standards in 2015. These draft criteria identified three key issue areas, which became the three main pillars of RDR’s standards: the first revolves around the broad responsibility of businesses in the context of well-established international human rights standards. The remaining two focus specifically on businesses’ responsibilities toward two fundamental rights: the right to freedom of expression and information and the right to privacy.

In the earliest edition of the standards, the first of the three categories was known as “Commitment.” (This would become “Governance” from 2017 on, while Freedom of Expression (F) and Privacy (P) have retained their names.) But it isn’t only the names that distinguish the 2015 methodology from today’s RDR standards. This first version featured far fewer indicators, which also meant fewer of what we refer to as “elements” (the series of questions that help score each indicator), and it didn’t yet use a harmonized answer format. Some indicators, like C4, had a checklist for evaluation, meaning that full credit was earned when all listed elements were checked off. Other indicators, like C1, had questions with yes/no answers, or with other pre-selected answers. A full explanation of the methodology can be found here.

Using this methodology, in November 2015, RDR launched its first official Corporate Accountability Index, evaluating eight tech companies and eight telcos. The scorecard received worldwide media attention, demonstrating global interest in corporate respect for users’ rights and the relevance of RDR’s work in ongoing discussions around digital rights issues.

Part 2: RDR Standards Evolve Toward Their Current Iteration

2017 RDR Standards Indicators Overview

Having survived the production and release of the inaugural Index, the team did not rest on its laurels but went right back to work revising the methodology and data collection process based on what we had learned through this experience. In July 2016, a draft revision (edited version) was published for public consultation. Stakeholders from civil society, academia, the investor community, and the companies themselves provided feedback. The RDR team then incorporated this feedback, received across two phases of consultations, in order to create a finalized 2017 RDR Index methodology.

Some substantial changes were made to this new version. It was at this point that the “Commitment” category was renamed “Governance,” in order to more accurately reflect that these indicators and their elements go beyond seeking a mere commitment. They ask companies to demonstrate broad governance and oversight mechanisms to ensure they are able to fulfill their commitments to freedom of expression and privacy. The indicators under the “Freedom of Expression” category were expanded and reordered. For example, an indicator about network shutdowns was created in order to better capture how companies were dealing with what RDR noticed was a growing threat to freedom of expression. Indicators in the “Privacy” section were also reorganized and reframed. In addition, an indicator on data breaches was added, while several indicators related to security standards were revised.

All indicators were refined to use a standardized scoring format, making data collection and score calculation more straightforward. As we detailed at the time, the indicators were reworked so that they were framed as normative statements (“The company should…”), while elements became questions (“Does the company…?”). This meant that the indicators stated our expected standards more explicitly, while the elements measured whether companies meet those standards, with each element answer mapped to a point value:

  • Yes (full disclosure) = 100
  • Partial = 50
  • No disclosure found = 0
  • No = 0
  • N/A = excluded from the indicator’s score
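For readers new to the standards, the roll-up from element answers to an indicator score can be illustrated with a minimal sketch. This assumes, per RDR’s published methodology, that an indicator’s score is the simple average of its element values, with N/A elements excluded from the denominator; the function names are illustrative, not part of RDR’s tooling:

```python
def element_value(answer):
    """Map a harmonized element answer to its point value; N/A returns None."""
    values = {"yes": 100, "partial": 50, "no": 0, "no disclosure found": 0}
    if answer.lower() == "n/a":
        return None  # N/A elements are excluded from the indicator's average
    return values[answer.lower()]

def indicator_score(answers):
    """Average the scored elements of one indicator, skipping N/A answers."""
    scored = [v for v in (element_value(a) for a in answers) if v is not None]
    return sum(scored) / len(scored) if scored else None

# An indicator with one full, one partial, and one N/A element:
print(indicator_score(["Yes", "Partial", "N/A"]))  # → 75.0
```

Category scores and a company’s final score are then built up from these indicator averages, which is why a single new disclosure can move an indicator by a visible increment.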

Additional research and analysis in 2016 concluded that, given that people around the world access the Internet primarily, or even exclusively, through smartphones, the Index should include companies that produce mobile software and devices. As a result, mobile ecosystem services were added, which also meant the addition of companies like Apple and Samsung that primarily manufacture devices and hardware. With these changes complete, a new Corporate Accountability Index was launched in March 2017, evaluating 22 companies using 35 indicators.

For the next scorecard, launched in April 2018, the standards remained the same. In 2019, some minor changes were made to two indicators, but not enough to speak of a wholly “new” iteration of the standards.

Part 3: RDR Takes On the Business Model

2020 RDR Standards Overview (indicator families have different shapes based on their number of sub-indicators)

In 2019, big changes were afoot once again, as Ranking Digital Rights began to home in on a crucial missing element in the web of complex power dynamics between government and Big Tech that was wreaking havoc on our information ecosystems. It had become clear that the problems caused by Big Tech did not stem solely from what might amount to improper security or company negligence. Instead, companies’ very deliberate use of surveillance-based business models was directly threatening democracies and users’ rights. One year later, RDR launched the “It’s The Business Model” report series, as well as a new and revised version of the RDR standards. This new iteration included indicators meant to hold companies accountable on two key issues directly related to this business model: targeted advertising and algorithmic content governance systems.

During the first half of 2019, RDR research staff conducted extensive desk research and gathered feedback from more than 90 expert stakeholders. During this process, a consensus was reached around the functioning of the surveillance-based business model. As a result, in October 2019, RDR published draft indicators on both targeted advertising and algorithmic systems.

RDR published a draft version of the 2020 RDR Index methodology (redline version) in April of that year, which integrated this work across the three main categories. The draft was validated by a final round of public consultation, resulting in the final 2020 RDR standards. One of the main challenges of this round of additions was the sheer volume of new indicators being added: the number of evaluation criteria grew from 35 indicators in 2019 to 58 in 2020.

This 66% increase could easily have made it impossible to conduct year-on-year comparisons of how company policies had improved or worsened. But the team came up with a solution: introducing “families” of indicators, groups of indicators that apply to similar issue areas. For example, the G6 indicator—which evaluates whether companies provide clear and predictable remedy when users feel their rights have been violated—was divided into a family of two indicators: G6a is the same as indicator G6 from the 2019 RDR Index, while G6b is a new indicator that applies standards for how platforms should handle content moderation appeals. This approach made it possible to integrate new indicators addressing companies’ targeted advertising and algorithmic systems without having to renumber existing indicators, which would have imperiled our ability to continue making comparisons through time.

Part 4: Facing New and Old Challenges—RDR’s Methodology Today

The final important change came in 2022, this time focused on the way RDR’s results were presented. For the first time, the Index was launched in two parts: the Big Tech Scorecard and the Telco Giants Scorecard, using the same standards as in 2020. RDR was motivated both by external feedback that the 26-company Index was too dense for audiences to engage with all at once, and by the significant resource challenges involved in producing a report of this scope every year. Splitting the Index into two Scorecards allowed RDR to design a more manageable process for data collection and analysis, and to be more thoughtful about how results were presented for these two important and different sectors.

Today, the RDR methodology is facing new challenges, as well as some unresolved ones from its past. It’s always been hard to measure something as complex as corporate commitments to digital rights, and as a result, the RDR standards are not easy to quickly grasp. The organic evolution of the indicators, as described above, has left a large number of indicators added over time, which can be hard to digest, particularly since the way in which companies’ final scores are calculated is not always intuitive. And a total of 58 different criteria means a pronounced learning curve for researchers using the standards for the first time.

Finally, it’s important to note that this methodology has not only been used throughout the years by the team itself; RDR’s research process has always been open source, and its standards have remained available to be adopted by other organizations and experts. This transparency in the development of these standards made it possible for global digital rights organizations to begin creating the first adaptations of the RDR methodology in Pakistan, India, Kenya, Senegal, and the Arab States, beginning in 2016 through 2018.

In the years since, RDR has continued working closely with research and advocacy partners globally who wish to employ the open RDR Index methodology to add to the growing number of adaptations. In 2021, RDR began a collaboration with the Greater Internet Freedom Project (GIF) at Internews to mentor and help regional and local partners by transferring technical expertise to hold tech and telecom companies accountable for protecting internet freedom. As a result of this and other collaborations, more than 200 companies have been evaluated using the RDR standards across 46 countries. Though this global work has brought great rewards and reams of new data, it is not without its unique challenges.

Some partners have noted that certain indicators are less relevant for smaller companies than for those large giants evaluated by the flagship RDR indexes. For example, small- and medium-sized companies that are not traded on a public stock exchange may not have a Board of Directors or other governance structures that are central to the G indicators. Thus, RDR is continuing to work to find the right balance between simple and well-defined standards, which are easy to understand, and ensuring sufficient flexibility to adapt to different local contexts.

Another challenge RDR currently faces: determining how all the additional information and data being created by a growing number of local partners can be used for meta-comparison and analysis, particularly when partners must often modify the methodology to carry out their individual research projects. One of the key findings of the latest Telco Giants Scorecard (TGS) was a seemingly consistent disconnect between the policies observed at company headquarters and those of their various subsidiaries, which were, in many instances, in countries with a more volatile environment for human rights. There’s still more work to do to analyze this data from partners and paint a clearer picture of any notable pattern of discrepancies.

Part 5: Conclusion—RDR Looks to the Future

A new iteration of the RDR standards is coming. Stay tuned!

As RDR moves into its second decade, it’s important that its standards serve not only the needs of the core team itself and its flagship indexes. RDR hopes that its work can enrich the field of corporate accountability in the tech sector as a whole, creating a space where other organizations bloom, including in the Majority World.

Finally, RDR has always recognized the constantly evolving nature of the tech field. As technologies change, human rights are affected in the digital sphere in new ways, making revisions of the standards necessary to meet these new challenges. The growing and novel uses of generative AI present one of the biggest and most imposing challenges our field has faced in a long time. As such, RDR is developing a set of preliminary standards for generative AI, which presents many potential new risks, especially given the unprecedented speed at which it’s being rolled out and adopted by almost all tech companies. A new Generative AI Accountability Scorecard, measuring generative AI services, is forthcoming. We currently have an ongoing call for consultations on our generative AI standards, which will remain open until September 10.

Meanwhile, the RDR team is working on a new, broader revision of its methodology that will take into account all of these challenges. With the future of tech accountability facing perhaps more uncertainties than ever, this won’t be an easy endeavor. But there are also many exciting, positive updates in store for the next generation of RDR standards, particularly as our global work continues to expand. And, as always, the entirety of our upcoming development process will be open and collective. This means that all of our allies and stakeholders will be able to follow along on this next chapter in the ever-changing story of the RDR methodology.

Acknowledgements: This article was written by Augusto Mathurin with contributions from Sophia Crabbe-Field and Nathalie Maréchal.


In 2022, for the first time, Ranking Digital Rights divided its flagship Corporate Accountability Index—which, since 2015, has evaluated the world’s most powerful digital platforms and telcos on their respect for human rights—into two, becoming the Big Tech Scorecard (BTS) and the Telco Giants Scorecard (TGS). This allowed RDR to place a particular, and much-needed, spotlight on the role of telcos in either protecting or enabling our digital rights.

As we’ve noted, digital platforms often receive far greater attention, including from the media. But telcos are just as likely to perpetuate harmful violations of user rights. And in many parts of the world, they also wield vastly more power. This stems from both their much closer relationship with governments as well as from being the primary means of accessing the internet. In other words, telcos are veritable gatekeepers of the web. Their power is especially notable when they are government-owned and/or operate in authoritarian or authoritarian-leaning countries.

One of the most severe manifestations of how they can wield their power to curtail rights is through the imposition of network shutdowns. Other potential violations include receiving, and responding to, requests for censorship, as well as handing over data to authorities or other third parties that is more detailed than the data obtained by platforms. This may include users’ communications as well as demographic, location, and billing data. In fact, the ability to acquire this data has given telcos a newfound interest and ability to carry out potentially harmful targeted advertising. Despite these potential risks, and higher susceptibility to government demands, our inaugural Telco Giants Scorecard (TGS) found that telcos are, on the whole, less transparent, earning lower scores than their Big Tech peers.

Despite Laggards, Some Telcos Continue to Improve on Human Rights

But in spite of overall disappointing scores for many companies that have stuck to the status quo, three companies RDR evaluates, and has engaged with, have continued to improve. These include Spain-based Telefónica, which earned the top score of 57% in our 2022 TGS. Though it has led the pack since 2019, this grade was up from 49% in the last Corporate Accountability Index in 2020. It achieved this new grade due, in large part, to expanded human rights risk assessments and new disclosures sharing that it does not comply with private requests for censorship or user information.

Meanwhile, two companies headquartered in the Global South, South Africa-based MTN and América Móvil, headquartered in Mexico, increased their scores from 23% and 22% in 2020 to 34% and 32%, respectively, in 2022. These increases allowed them to leap ahead of Orange, with MTN becoming the first emerging-market company to make it into our top six. Both companies cited RDR’s standards as pivotal in inspiring the transparency reports each published for the first time, the first by companies in Africa and Latin America, respectively. These reports offered explanations of processes for managing government demands for shutdowns, censorship, and user information, all critical areas for transparency from the telecom sector.

How RDR Engages With These Telcos to Bolster Digital Rights Globally

To better understand how RDR’s direct engagement with companies has helped compel positive changes in company policy, RDR spoke directly with representatives from the three telco giants. Carlo Manuel Drauth is Head of Responsible Business and Human Rights at Telefónica, which sits atop our rankings. For someone who, like him, operates within the sustainability department of a large telecom multinational, RDR’s evaluations and recommendations are a critical piece of data to point to when communicating the need for human rights-based policy changes. “It always helps if you have an internationally accepted benchmark on how to manage potential digital rights impacts and you can show internally that these issues are material to the broader stakeholder community,” he explains. It helps that many companies know and understand that RDR represents a range of stakeholder interests, from civil society to investors.

Two advantages of RDR’s work appear to particularly stand out to companies’ human rights representatives: a deeply rigorous methodology, combined with an easy channel for communication. América Móvil explained that working with RDR, and examining the company’s scores along each indicator, has helped them to better communicate internal policies and best practices to the public. Carlo Drauth at Telefónica agrees. Though Telefónica is working with numerous other indices and analyses, he believes that RDR’s “scientific rigor is unparalleled” and provides an evidence-based tool for stakeholder interactions on digital rights.

América Móvil points as well to the company engagement RDR carries out after our evaluation, including RDR’s “fluid communication channel,” the chance to review pre-scoring, and the insights RDR subsequently provides about potential avenues for improvement through our follow-up process. América Móvil, like many companies, appreciates RDR’s “race to the top” approach. As they explain, “it helps that you can see your peers’ ratings through time and by topic, facilitating the identification of priorities and areas of opportunity. Being able to see when peers get a jump in their score also helps push for internal shifts, which fortunately have been reflected in the company score.” Ncumisa Willie, Senior Manager for Digital Human Rights at MTN—the company whose score rose the most in the 2022 Telco Giants Scorecard—agrees. “It’s great when you see that you are improving and you see your scores. It motivates you to go back and realize that actually you want to do more and more,” she explains.

Indeed, as mentioned, pressure from RDR has translated into measurable improvements in policy and processes. RDR provided impetus for MTN’s decision to conduct new human rights impact assessments. Meanwhile, América Móvil points directly to RDR as a major inspiration for the company’s first-of-its-kind transparency report. Carlo at Telefónica highlights the company’s principles on artificial intelligence, the first of their kind when published in 2018, an area RDR had previously raised with the company. Though these principles existed internally, RDR provided a main impetus and encouragement to develop a governance structure for their implementation. RDR’s indicators also provided a blueprint for the development of Telefónica’s Global Transparency Center, where customers and other stakeholders can find information on policies related to privacy, security, and freedom of expression. For Carlo, RDR’s Index functions as a “sort of north star,” either confirming that the company is headed in the right direction, or else acting as a “corrective measure” when it isn’t.

At RDR, we believe that, though our standards for rights-respecting policies and transparency may be high, they’re also achievable. Furthermore, they are necessary if we are to protect human rights as new risks and harms emerge from novel technologies. They are also essential for telcos, whose ability to help governments suppress the rights of those living under difficult socio-political conditions, including authoritarianism, has, for too long, been underestimated. Though each and every telco we rank, including these three, has a long way to go to better protect human rights, Telefónica, América Móvil, and MTN have demonstrated the strides, and score improvements, that are nonetheless possible when companies take seriously both the risks and the ensuing responsibilities that our standards highlight.


With the release of her pivotal Consent of the Networked in 2012, RDR Founder Rebecca MacKinnon issued a call to action for civil society to defend our digital rights from companies with the same rigor with which we’ve previously fought for our rights before governments. In 2013, Rebecca answered the call directly by launching Ranking Digital Rights based on the idea that there needed to be a standard against which companies would be measured for adherence to human rights.

Whereas RDR first encountered strong resistance from companies, today, through sustained engagement, it has helped create a “race to the top,” propelling Big Tech and Telco giants toward improved policies that better protect human rights in the digital sphere, from stronger privacy protections to companies’ first-ever transparency reports. Since 2020, RDR’s data has been cited with growing frequency in increasingly common shareholder proposals, at companies from Meta to Alphabet, put forward by the responsible investor community to hold the tech sector accountable.

While RDR has seen its influence grow among companies and investors, its methodology has also been employed with increasing frequency to hold smaller tech companies, as well as subsidiaries of the companies evaluated by our Scorecards, accountable to marginalized populations and to citizens living in complex socio-political conditions, from Iran to Russia, from Southeast Europe to South and Central Asia, and by LGBTQ advocacy organization GLAAD in the U.S.

To mark a decade of milestones, you can now browse our Decade of Tech Accountability in Action page. Here’s what you’ll find:

Take a look back at our top 5 accomplishments over 10 years: With over a decade of work, there’s a lot to choose from. So what are we at RDR most proud of? See what made the cut among our top 5 accomplishments over 10 years.

RDR Looks to the Future: To mark this important occasion, RDR held its first-ever in-person retreat this spring. Check out our strategic priorities as we step into our second decade.

A Conversation with RDR Founder Rebecca MacKinnon: From RDR’s early days to her proudest RDR moments to her current work with the Wikimedia Foundation, check out this can’t-miss conversation between Rebecca and RDR’s Research Manager Zak Rogoff.

Testimonials: RDR has worked with everyone from global civil society to investors to companies to achieve our priorities. Take a look at what our partners and stakeholders have to say about working alongside RDR.

The RDR Timeline: Take a look back at some of our most pivotal moments.

Deep Dive into 10 Years of Impact: Hear first-hand from some of the partners and actors who have had the most lasting impact on our work, and on the digital rights field at large. Check out the following 5 stories to discover how RDR works with global civil society partners, companies, policymakers, and investors to push for change:

If you’re interested in where RDR is headed, check out our call for consultation on our upcoming standards for generative AI →


RDR Issues Call for Consultation on Generative AI Accountability Standards

In the fall of 2023, Ranking Digital Rights will release the inaugural Generative AI Accountability Scorecard, a report card and roadmap for consumer-facing generative AI services to respect the human rights to privacy, non-discrimination, freedom of expression, and freedom of information. The Scorecard will rely on this set of draft indicators as a basis for scoring.

For more information on this project, including the rationale for its creation, please check out our June 2023 report published alongside RDR’s set of preliminary standards on generative AI, which formed the basis for these draft indicators.

Before we transform this draft into final indicators and conduct our evaluation, we’re looking for input from civil society experts and partners. If you’re interested in providing feedback, please email us at methodology@rankingdigitalrights.org.

You may also contact our Research Manager Zak Rogoff at rogoff@rankingdigitalrights.org.

For more info, please check out our full draft indicators.


Among Censorship and Shutdowns, RDR’s Methodology Is Spotlighting the Need for Transparency in Central Asia

In January 2022, protests broke out in Kazakhstan over rising gas prices. Protestors, whom the government accused of attempting a coup, were met with violent repression from law enforcement authorities and armed groups. Access to the internet was also quickly restricted, as it had been in both 2020 and 2021. A similar tendency toward restricting online freedoms has been noted in Uzbekistan and Kyrgyzstan.

It is within this context that RDR supported local Central Asian digital rights organization the Public Fund Civil Internet Policy Initiative (CIPI) in researching whether and how tech companies—including telcos, e-commerce platforms, and fintechs—in Central Asia have committed to respecting human rights and protecting their users.

Though their research points to several gaps in company policies, perhaps the most glaring omission observed is companies’ failure to provide transparency on policies for responding to government requests for internet shutdowns, despite the imminent threat these present to human rights across the region. In addition, CIPI’s research highlights a trend RDR has previously reported on: the tendency of large telecommunications companies, often based in Western Europe, to provide fewer human rights protections to the users of their subsidiaries abroad. This is a gap that must immediately be bridged.

Read more about the impact of these companies on human rights in Central Asia, and how CIPI is working to improve human rights policies in the region. →

For even more info, check out the full report. →


Is Momentum on Tech Shareholder Activism Stalling? How to Reinvigorate It in 2024

Though support for investor proposals at AGMs (annual general meetings) was down overall this year, the number of human rights-based proposals was up by 14% since 2020. This is just one of many emerging trends that the responsible investor community should take note of as we work to improve support during next year’s round of meetings.

In a recent post, we highlighted a number of patterns that may have impacted support in 2023. These include:

  • Ever-present barriers like the dual-class share systems at companies like Meta and Alphabet continue to give founders extremely inflated voting power and play a role in limiting proposals’ success.
  • After last year’s period of uncertainty in the tech sector, shareholders may also be refocusing their attention on proposals where risks can be more clearly linked to companies’ business outcomes and materiality.
  • While the number of overall proposals has increased, the number of specifically anti-ESG proposals has skyrocketed. Where they were almost nonexistent before 2020, over 50 were filed in 2023. In some cases, proposals put forward by “anti-ESG” groups may have created confusion by employing similar language to those that call for politically agnostic disclosures on human rights issues.
  • Large institutional investors often rely on analysis from ESG ratings agencies to inform voting decisions. As we discussed in a recent piece, these ratings often lack the grounding in international human rights standards that the RDR Corporate Accountability Index and other human rights benchmarks offer.

In the coming weeks, we’ll be joining forces with the Investor Alliance for Human Rights’s Anita Dorett to take a deeper look into some of the results at this year’s meetings and share how investors and our partners can learn from the trends of the past two years to create stronger and more effective shareholder advocacy at Big Tech companies in 2024.

Read more on this and on the specifics of RDR-supported proposals at Meta, Alphabet (Google), and Amazon. →


RDR Media Hits

NPR: RDR’s Research Manager Zak Rogoff spoke to NPR about shareholder advocacy ahead of Big Tech annual general meetings.

Read More at NPR

 


Support Ranking Digital Rights!

If you’re reading this, you probably know all too well how tech companies wield unprecedented power in the digital age. RDR helps hold them accountable for their obligations to protect and respect their users’ rights.

As a nonprofit initiative that receives no corporate funding, we need your support. Do your part to help keep tech power in check and make a donation. Thank you!

Donate

 

Subscribe to get your own copy.


Since 2018, organizations across various socio-political contexts have adapted RDR’s methodology to hold platforms and other ICT services accountable. What do they each have in common? Protecting the rights of some of the globe’s most marginalized and discriminated-against populations. Two of the most successful and notable adaptations are: measuring how well social media companies protect LGBTQ rights online and evaluating messaging apps under conditions of state repression in Iran.

The former was conducted by GLAAD, the American LGBTQ media advocacy organization, to create its now-annual Social Media Safety Index (SMSI), which includes a Platform Scorecard evaluating five top platforms (Facebook, Instagram, Twitter, YouTube, and TikTok) on how well they protect LGBTQ users. The main issues identified by Jenni Olson, the Senior Director of the Social Media Safety Program, as harming LGBTQ users include inadequate content moderation and enforcement, harmful algorithms, and a lack of transparency. The SMSI helps shed light on how online hate speech, misinformation, and conspiracy theories targeting LGBTQ people are able to spread and manifest online unchecked. Each year, the SMSI has made headlines. But the 2023 SMSI stirred the most controversy for the large drop in score experienced by Twitter. The company, now under the helm of Elon Musk, fell 12 points (all other platforms saw their scores increase) and became the most dangerous platform for LGBTQ people this year.

The second adaptation was used to create a report called “Digital Rights & Technology Sector Accountability in Iran,” a joint collaboration between Filterwatch and Taraaz that examined both local and international messaging apps. When Roya Pakzad, Taraaz’s Founder and Director, last spoke to us about her work on the report, protests were ongoing over the death of 22-year-old Mahsa Amini, who died in police custody after being arrested over the “improper” wearing of her hijab. The mass mobilization that followed was sparked over social media and was met with internet shutdowns and outages. Meanwhile, the government was being accused of pushing through the “draconian” Internet User Protection Bill, which would vastly curtail what Iranians can access on the web.

These two conversations with Jenni and Roya were first published in 2022, as part of our interview series, Digital Rights Dialogues. Among other things, we spoke to them about why they chose to use the RDR methodology, how they adapted it, and how it is helping them achieve their goals:

Jenni Olson: We had been thinking about doing a scorecard and trying to decide how to go about that. We knew that we wanted to lean on someone with greater expertise. We looked to Ranking Digital Rights as an organization that is so well respected in the field. We wanted to do things in a rigorous way. We connected with RDR and you guys were so generous and amenable about partnering. RDR then connected us with Goodwin Simon Strategic Research, with Andrea Hackl [a former research analyst with RDR] as the lead research analyst for the project. That was such an amazing process and, yes, a lot of work. With Andrea, we went about developing the 12 unique LGBT-specific indicators and then Andrea attended some meetings with leaders at the intersection of LGBT, tech, and platform accountability and honed those indicators a little more and then dug into the research. For our purposes, the scorecard seemed like a really powerful way to illustrate the issues with the platforms and have them measured in a quantifiable way.

Though it’s called the “Social Media Safety Index,” we’re looking not only at safety, but also at privacy and freedom of expression. We developed our indicators by looking at a couple of buckets. The first being hate and harassment policies: Are LGBTQ users protected from hate and harassment? The second area was around privacy, including data privacy. What user controls around data privacy are in place? How are we being targeted with advertising or algorithms? Then the last bucket would be self-expression in terms of how we are, at times, disproportionately censored. Finally, there is also an indicator around user pronouns: Is there a unique pronoun field? Due to lack of transparency, we can’t objectively measure enforcement.

What we end up hearing the most about is hate speech, but it’s important to note that LGBTQ people are also disproportionately impacted by censorship. We’re not telling the platforms to take everything down. We’re simply asking them to enforce the rules they already have in place to protect LGBTQ people from hate. 

I’m not naïve enough to believe that the companies are just going to read our recommendations and say “Oh wow, thank you, we had no idea, we’ll get right on that, problem solved, we’re all going home.” This kind of work is what GLAAD has done since 1985: create public awareness and public pressure and maintain this public awareness and call attention to how these companies need to do better.

There are times when it feels so bad and feels so despairing like, “Oh, we had this little tiny victory but everything else feels like such a disaster.” But then I remind myself: This is why this work is so important. We do have small achievements and we have to imagine what it would be like, how much worse things would be, if we weren’t doing the work. I’m not naïve that this is going to create solutions in simple ways. It is a multifaceted strategy and, as I mentioned a minute ago, it is also really important that we’re working in coalition with so many other civil society groups, including with Ranking Digital Rights. It’s about creating visibility, creating accountability, and creating tools and data out of this that other organizations and entities can use. A lot of people have said, “We’re using your report, it’s valuable to our work.”

Roya Pakzad: For a long time now, the Iranian digital rights ecosystem has been Iranian people resisting government censorship and the Iranian government trying to censor the internet. If you read literature from 2008 until 2016, you see that civil society wasn’t really focusing on the role of companies in their digital rights advocacy. The focus [of the report] was mainly on government censorship. So we wanted to say, “Oh no, there are so many actors in the middle, and we have to focus on them because they have a responsibility too.” Part of that was just introducing the idea of corporate social responsibilities.

We wanted to introduce GNI (Global Network Initiative), a non-governmental organization that assists companies in respecting freedom of expression and privacy rights when faced with pressure from governments to hand over user data or remove or restrict content, into the conversation. We wanted to introduce multi-stakeholder engagement. We wanted to introduce human rights impact assessments. We wanted to introduce Ranking Digital Rights’s great index and show that you can use that for evaluating yourself as a company, or journalists can use it to evaluate you. That’s why we didn’t just pick certain indicators [from Ranking Digital Rights’s methodology], we used all of them, because the main purpose was an educational approach with the idea of business and human rights, introducing all of the ideas of human rights due diligence and human rights impact assessment policies.

We did have to adapt for the context of Iran and its current lack of discussion about business and human rights. And the other thing we noted is e-government services being an add-on to other services. The government incentive to use Iranian messaging apps also means you pay less than you do to use Telegram or WhatsApp, because the data that you pay for costs less than data to access foreign apps. If you don’t have enough money to pay for VPNs, it means that you can only use Iranian apps, which penalizes people because of their socio-economic status, as the government changes the tariff for data, for example. In the context of Iran, we had to pay attention to the narrative that we use and explain why we are using privacy and freedom of expression indicators and mixing them with a discussion of the socio-economic context.

[A colleague] and I also recently worked with the Iran Academia’s MOOCs program to record a lecture based on our RDR report. We have seen a lot of attention directed at the role of technology companies and technologists in digital rights in Iran. The gap that we saw back in 2017, with regards to the lack of attention to the private sector, has been shrinking dramatically in just a year. We have seen so much mobilizing, dialogue, and resistance from the tech ecosystem in Iran against government policy, like tech companies putting up banners on their websites publicly announcing their objection to the [Internet User Protection bill]. There have also been cases of naming and shaming public-private partnerships and contracts.

Companies have told us, informally and through back channels, that they are interested in using the workbook [we produced with the report] to revise their policies and update them. A non-ranked company even asked me to give a talk in their forums and for their employees (which I decided not to do, because I was worried about getting them in trouble). We have seen ICT journalists inside the country using approaches from the RDR Index to compare company policies.

Company engagement is something that we learned a lot from. The companies that we evaluated completely ignored us, to be honest with you. Sometimes we saw that some people from the evaluated company added us on LinkedIn. So we knew that they read the report, but they didn’t engage, even though we contacted them over email, we sent Twitter messages, we sent LinkedIn messages.

But non-evaluated companies, such as marketplace apps, said, “Oh we want to update our policies and we will use the workbook.” Because they were not evaluated they were like, ‘Okay, we are safe.’ They interacted with us and with journalists and students in tech policy; they were interested.

If you are interested in adapting the RDR methodology to achieve your corporate accountability goals in the tech sector, please get in touch at partnerships@rankingdigitalrights.org or visit our Research Lab for more information.

Zak Rogoff: Can you start by giving me your observations of what RDR is like now compared to the beginning, when you founded it?

Rebecca MacKinnon: In 2013, it was just an idea getting off the ground. We didn’t actually have an index of any kind until 2015. And in 2013, I was the only full-time employee. In the first half of 2013, I had a collaborative partnership going on with the University of Pennsylvania and a bit of funding to support some interns and fellows, and Allon Bar, whom I hired on contract in late 2013. So it was a very shoestring operation. We kind of cobbled it together with band-aids and paper clips and scotch tape.

We just had the idea that there needed to be a standard against which companies should be evaluated for respecting human rights. It needed to be something that made sense to investors. It needed to be something that resembled rankings and ratings of companies on other issues and in other industries. It needed to learn from what others had done in terms of what was effective and what was not.

We didn’t have the funds to actually produce the ranking for the first couple years. So we took our time doing a lot of research and consultation and producing iterative drafts of criteria before it actually became a methodology. So, in the summer of 2013, I was working with Tim Libert and Hae-in Lim, some students, and also with some researchers who were funded by Internews, to just take some draft criteria and test them out by looking at companies in some different countries and regions to figure out which criteria even were measurable or made any sense to evaluate, and which types of criteria didn’t work. Just to get some basic understanding of what made sense to include if the purpose was to incentivize company improvement and not just create sensational reports about outrageous things.

We learned a lot, but we also faced some resistance. A number of companies now have pointed to their results in the Index, and pointed to their improvement and to RDR’s work as being useful and constructive. But in 2013, some of the same companies were not happy about the idea of a public ranking when they learned about it, and tried to convince me it was not a good idea.

ZR: That’s a great story. And now they’re using it.

RM: At the time, a number of companies found this idea to be quite threatening. And today some of those same companies are using the Index and the methodology internally and, at least privately, if not publicly, acknowledge that is very helpful. 

ZR: Usually, when I explain to people why I think what we do makes a difference, the thing that I start with is that there are companies that actually talk to us and they say, “Look, we made this change because you asked us to do it.” It’s satisfying to know that some of those who use it now felt differently back then because obviously, as you know, there are still companies that won’t give us the time of day. But clearly that can change.

RM: Yeah, I mean I really knew that we were succeeding when a company that wasn’t in the Index approached me and asked if they could be in the Index. We only had so many resources for so many companies that we had to prioritize. We just weren’t able to include them. 

ZR: Tell me, what was your proudest moment working at RDR? 

RM: Well, there were a lot of proud moments so it’s hard to pick one. When investors started citing our data in shareholder resolutions, that was a very proud moment. When we saw Apple actually making changes in response to the shareholder resolution that cited our data, that was a very proud moment.

Another very proud moment was seeing civil society and research groups around the world using the methodology to hold industry in their regions accountable. When SMEX, in the Middle East/North Africa region, applied our methodology to telcos in that region and showed just how little was being disclosed and just how poor the policies were, that was a very proud moment. It showed that the methodology can be used in a lot of different ways. It’s not just about people in a privileged country applying criteria to people and to companies in other parts of the world. But it’s people in their own regions able to use these criteria to empower themselves to advocate with companies in their region to protect their rights better. That was a really proud moment, when we were able to see how the methodology was being picked up by people in lots of different parts of the world.

ZR: I feel like that’s one of the parts of the work that’s grown the most exponentially. I can’t even keep track of how many pots we have in the fire with different people doing adaptations of our methodology. 

RM: What’s made me most proud, especially since I’ve left and seen more adaptations coming out, is just knowing that this methodology and the work we did together – the work you all have done together since – that impact we’ve made lives on no matter what. The methodology, the way of thinking about human rights, digital rights, and corporate accountability – RDR has left an indelible mark that’s going to continue to evolve through all kinds of research. It’s had a huge impact on how investors think about these issues. And, so no matter how RDR evolves from here, it’s going to live on in really interesting ways. The impact is going to continue to spread in ways that will be hard to predict.

And one thing that I really like is that it’s not centralized. We don’t control the methodology, we don’t control how it’s used. On one hand that might seem scary, because who knows who might do what in what strange way. But on the other hand, the fact that we don’t control it in a centralized fashion and that we’re not trying to control the IP, we’re not trying to license the use of the methodology, means that it can’t die. People are free to build off of it in different directions, in ways that empower different groups of people to use the leverage that they have available, to bring about change by companies. 

ZR: My last question on this topic is, what do you think is unique about it to this day? Obviously, when you started it, it was completely unique. There was nothing like it. But there are more people looking at these issues than there were back then. So what do you see as unique about it, especially now that you have some distance from it? 

RM: I guess one thing that’s unique about it, is that it represents the input of a lot of people from many different fields, from many different parts of the world. We were not just a group of experts who sat down and figured out a methodology. We had a few ideas, but we workshopped them with people from a range of different countries and regions with a range of different types of expertise. We then took that and workshopped it again, talked to companies about it and got a lot of company input in addition to input from other stakeholders, then revised it again. We did a test run, then improved it again for the first Index, then learned the lessons from the first Index, and improved it again.

And so because of its iterative and consultative nature, it doesn’t constitute any one person’s ideas or agendas. It’s a high bar, but it’s a reasonable bar. It’s achievable and it reflects a broad consensus of what a rights-respecting company ought to be doing.

ZR: Tell me what you’re working on right now. I know you’ve been doing a lot of Section 230 related work?

RM: So at the Wikimedia Foundation, the job of my team, who are responsible for public policy advocacy, is to advocate for laws and regulations and government behaviors that make it possible for people to edit Wikipedia – no matter who they are and where they are, without fear of being threatened or censored or sued and so on. In the U.S. Section 230 is an existential thing for us because, not only does it protect us from liability for what people post on Wikipedia, but it protects the right of volunteer editors to actually enforce rules that are context-appropriate without being sued and without us being sued. And so the entire model depends on Section 230. 

Whether any reforms to Section 230 would be okay, you’d really have to red-team it, and in great detail, to see what unintended consequences could arise for Wikipedia or other public interest platforms that do not rely on targeted advertising business models. Most Section 230 reform proposals focus on the large commercial platforms, and their authors aren’t thinking about the implications for non-profit or decentralized, community-run platforms like ours.

ZR: You can’t say, “Oh, it needs to stay exactly the same,” but it also ultimately has to be a tested, well-considered change if it is changed.

RM: Exactly. So people ask “Well, could you accept any reform?” And it’s hard to say in general terms.

Wikipedia is now subject to the EU’s Digital Services Act as a Very Large Online Platform, which means that the Wikimedia Foundation (as Wikipedia’s technical and legal host) is required to do risk assessment and transparency reporting. Gee, sounds familiar (these are key parts of RDR’s standards). The DSA also requires grievance and remedy mechanisms. Again, sounds familiar. I think there could have been some RDR influence on what the DSA requires. We were already doing these things, so it’s mainly about strengthening our practices, making sure we are communicating them clearly, and addressing risks in the ways European regulators require.

Another important thing about the DSA is that we (Wikimedia) were in dialogue with European lawmakers throughout the drafting process. The final scope of the law took Wikimedia’s volunteer-run content moderation model into account: it only applies to content moderation rules set and enforced by the platform operators, not volunteer communities.

But I’m not convinced that U.S. lawmakers should consider DSA-style requirements in the context of Section 230. Of course, having a privacy law would help deal with a bunch of the issues that people are trying to solve with Section 230. (RDR’s analysis of targeted advertising and algorithmic systems reached a similar conclusion in 2020.) So we are saying to Congress: why don’t you all try the other things first before you mess with Section 230. But mainly, we’re just asking, “If you do want to revise Section 230 in a way that doesn’t hurt Wikipedia, then you need to bring us into the room when you’re drafting anything.”

ZR: I always use Wikipedia as the first thing I refer to when people ask: Why shouldn’t we change Section 230 or why shouldn’t we break it? It’s not a niche thing, everybody uses it. 

Well, this was great. Thank you and it was good to talk to you.

RM: Well, congratulations and I look forward to following what happens next.