Microsoft has unseated Google at the top of the 2019 RDR Corporate Accountability Index. Telefónica outpaced Vodafone among telecommunications companies. Yet despite progress, most companies still leave users in the dark about key policies and practices affecting privacy and freedom of expression, according to the 2019 Ranking Digital Rights Corporate Accountability Index, released today.

The 2019 RDR Index evaluated 24 of the world’s most powerful internet, mobile ecosystem, and telecommunications companies on their disclosed commitments, policies, and practices affecting users’ freedom of expression and privacy, including governance and oversight mechanisms. Research showed that in the past year a majority of companies improved and clarified policies affecting users’ privacy, a trend that appears to be driven by new data protection regulations in the EU and elsewhere. But even the leading companies fell short in key areas, and few companies scored higher than 50 percent, failing to meet even basic transparency standards and leaving users across the globe in the dark about how their personal information is collected and protected—and how it is profited from.

Companies evaluated by the 2019 RDR Index collectively provide products and services used by more than half of the world’s 4.3 billion internet users; the Index thus offers a snapshot of the extent to which users’ rights are protected and respected across the globe. The RDR Index methodology sets minimum standards for what companies should disclose about their rules and their processes for enforcing them, their data privacy and security policies and practices, and how they handle government demands to remove or block content, to shut down internet services, or to access user information and communications.

Company highlights

  • Microsoft ranked first, due to strong governance and consistent application of its policies across all services. It unseated Google, which had topped every previous RDR Index since 2015, though by a shrinking margin.

  • Telefónica shot ahead of all other telecommunications companies. Vodafone, which led in 2018, earned second place, ahead of AT&T, which dropped to third.

  • Facebook maintained fourth place among internet and mobile ecosystem companies, but scored just 57% and lagged behind RDR Index leaders in key areas. It disclosed no evidence of risk assessments covering its use of AI or its enforcement of terms of service, and despite some improvements it still disclosed less than many of its peers about how it handles user information.

Click here to view report cards for all 24 companies evaluated by the 2019 RDR Index. An in-depth report analyzing the 2019 RDR Index results across companies and issue areas elaborates on how the world’s most powerful tech companies have a long way to go before the internet supports and sustains human rights for everyone.

“People have a right to know, and companies have a responsibility to show,” said Ranking Digital Rights Director Rebecca MacKinnon. “When companies fail to meet RDR’s standards for disclosure of commitments, policies, and practices, users are exposed to undisclosed risks affecting their freedom of expression and privacy.”

For the full interactive data and analysis, report cards for all 24 companies, methodology, raw data, and other resources for download, please visit: rankingdigitalrights.org/index2019. Follow the conversation on Twitter using the hashtag #rankingrights.

Follow our 2019 RDR Index launch events online and in person:

On May 16th the full results of the 2019 Ranking Digital Rights Corporate Accountability Index will be released online on the RDR website, with key findings to be presented at the Stockholm Internet Forum.

Find out which companies have improved since the 2018 RDR Index—and how. Then join the global conversation about what companies and governments need to do in order to improve the protection of internet users’ human rights around the world.

More details about the timing of our Stockholm launch, and how to follow it online, will be posted on the SIF website and our events page in the coming weeks.

We are also pleased to announce several other launch events in May and June:

May 21, Washington DC: U.S. launch of the 2019 RDR Corporate Accountability Index at New America (9:30am Eastern time).

May 23, Palo Alto, CA: West Coast launch of the 2019 RDR Corporate Accountability Index at the Stanford Global Digital Policy Incubator (1:30pm Pacific time). 

June 11-14, Tunis, Tunisia: 2019 RDR Index session at RightsCon, exact date and time to be announced.

Our events page will be updated with more details about these and other events as they become available.

Subscribe to our newsletter to keep up with our plans and make sure you get the results of the 2019 RDR Index as soon as they are published!

This post is published as part of an editorial partnership between Global Voices and Ranking Digital Rights.

Raqqa, Syria in August 2017. Videos of the war posted to YouTube by media and rights groups started disappearing after the platform introduced new AI tools targeting terrorist content. Image via Wikimedia Commons by Mahmoud Bali (VOA) [Public domain]

A new video on Orient News’ YouTube channel shows a scene that is all too familiar to its regular viewers. Staff at a surgical hospital in Syria’s Idlib province rush to operate on a man who has just been injured in an explosion. The camera pans downward and shows three bodies on the floor. One lies motionless. The other two are covered with blankets. A man bends over and peers under the blanket, perhaps to see if he knows the victim.

Syrian media outlet Orient News is one of several smaller media outlets that have played a critical role in documenting Syria’s civil war and putting video evidence of violence against civilians into the public eye. Active since 2008, the outlet is owned and operated by a vocal critic of the Assad regime.

Beyond its own distribution channels, Orient News has relied on YouTube as an instrumental vehicle for bringing videos like this one to a wider audience. Or at least it did, until August 2017 when, without warning, the outlet’s YouTube channel was suspended.

After some inquiry by the group, alongside other small media outlets including Bellingcat, Middle East Eye and the Syrian Archive — all of whom also saw some of their videos disappear — it came to light that YouTube had taken down hundreds of videos that appeared to include “extremist” content.

But these groups were puzzled. They had been posting their videos, which typically include captions and contextual details, for years. Why were they suddenly seen as unsafe for YouTube’s massive user base?

Because there was a new kind of authority calling the shots.

Just before the mysterious removals, YouTube announced its deployment of artificial intelligence technology to identify and censor “graphic or extremist content,” in order to crack down on ISIS and similar groups that have used social media (including YouTube, Twitter and the now defunct Google Plus) to post gruesome footage of executions and to recruit fighters.

Thousands of videos documenting war crimes and human rights violations were swept up and censored in this AI-powered purge. After the groups questioned YouTube about the move, the company admitted that it made the “wrong call” on several videos, which were reinstated thereafter. Others remained under a ban due to “violent and graphic content.”

YouTube’s hasty removal of these videos highlights the problems of using automated tools to flag and remove materials — and why platforms need to be more transparent about their processes for policing content. Even when platforms like YouTube, Facebook, Instagram, and Twitter are clear about what types of content are banned, few provide clear information about what content they remove and why. This makes it difficult for users to understand why content has been removed and how to seek remedy when their rights are violated.

The myth of self-regulation

Companies like Google (parent of YouTube), Facebook and Twitter have legitimate reasons to take special measures when it comes to graphic violence and content associated with violent extremist groups — such content can lead to real-world harm, and it can be bad for business too. But the question of how they should identify and remove these kinds of content — while preserving essential evidence of war crimes and violence — is far from answered.

The companies have developed their policies over the years to acknowledge that not all violent content is intended to promote or incite violence. While YouTube, like other platforms, does not allow most extremist or violent content, it does allow users to publish such content in “a news, documentary, scientific, or artistic context,” encouraging them to provide contextual information about the video.

But, the policy cautions: “In some cases, content may be so violent or shocking that no amount of context will allow that content to remain on our platforms.” YouTube offers no public information describing how internal mechanisms determine which videos are “so violent or shocking.”

This approach puts the company into a precarious position. It is assessing content intended for public consumption, yet it has no mechanisms for ensuring public transparency or accountability about those assessments. The company is making its own rules and changing them at will, to serve its own best interests.

EU proposal could make AI solutions mandatory

The European Commission is threatening to intervene in this scenario, with a draft regulation that would force companies to step up their removal of “terrorist content” or face steep fines. While the proposed regulation would break the cycle of companies attempting and often failing to “self-regulate,” it could make things even worse for groups like Orient News.

Under the proposal, aimed at “preventing the dissemination of terrorist content online,” service providers are required to “take proactive measures to protect their services against the dissemination of terrorist content.” These include the use of automated tools to: “(a) effectively address the re-appearance of content which has previously been removed or to which access has been disabled because it is considered to be terrorist content; (b) detect, identify and expeditiously remove or disable access to terrorist content,” article 6(2) stipulates.

If adopted, the proposal would also require “hosting service providers [to] remove terrorist content or disable access to it within one hour from receipt of the removal order.”

It further grants law enforcement or Europol the power to “send a referral” to hosting service providers for their “voluntary consideration.” The service provider will assess the referred content “against its own terms and conditions and decide whether to remove that content or to disable access to it.”

The draft regulation demands more aggressive deletion of terrorist content, and quick turnaround times on its removal. But it does not establish a special court or other judicial mechanism that can offer guidance to companies struggling to assess complex online content.

Instead, it would force hosting service providers to use automated tools to prevent the dissemination of “terrorist content” online. This would require companies to use the kind of system that YouTube has already put into place voluntarily.

The EU proposal puts a lot of faith in these tools, ignoring the fact that users, technical experts, and even legislators themselves remain largely in the dark about how these technologies work.

Can AI really assess the human rights value of a video?

Automated tools may be trained to assess whether a video is violent or graphic. But how do they determine the video’s intended purpose? How do they know if the person who posted the video was trying to document the human cost of conflict? Can these technologies really understand the context in which these incidents take place? And to what extent do human moderators play a role in these decisions?

We have almost no answers to these questions.

“We don’t have the most basic assurances of algorithmic accountability or transparency, such as accuracy, explainability, fairness, and auditability. Platforms use machine-learning algorithms that are proprietary and shielded from any review,” wrote WITNESS’ Dia Kayyali in a December 2018 blogpost.

The proposal’s critics argue that forcing all service providers to rely on automated tools in their efforts to crack down on terrorist and extremist content, without transparency and proper oversight, is a threat to freedom of expression and the open web.

The UN special rapporteurs on the promotion and protection of the right to freedom of opinion and expression; the right to privacy; and the promotion and protection of human rights and fundamental freedoms while countering terrorism have also expressed their concerns to the Commission. In a December 2018 memo, they wrote:

“Considering the volume of user content that many hosting service providers are confronted with, even the use of algorithms with a very high accuracy rate potentially results in hundreds of thousands of wrong decisions leading to screening that is over- or under-inclusive.”
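The arithmetic behind that warning is simple base-rate math. As a purely illustrative sketch (the upload volume and accuracy figures below are hypothetical, not taken from the memo or from any platform's disclosures), even a classifier that is right 99.5 percent of the time produces a flood of wrong decisions at platform scale:

```python
# Illustrative back-of-the-envelope calculation with hypothetical numbers:
# even a highly accurate filter makes many mistakes at platform scale.

uploads_per_day = 500_000  # hypothetical: items screened daily by one platform
accuracy = 0.995           # hypothetical: 99.5% of decisions are correct

wrong_per_day = uploads_per_day * (1 - accuracy)
wrong_per_year = wrong_per_day * 365

print(f"Wrong decisions per day:  {wrong_per_day:,.0f}")   # 2,500
print(f"Wrong decisions per year: {wrong_per_year:,.0f}")  # 912,500
```

Under these assumed numbers, a single platform would make roughly 900,000 wrong calls a year, each one either wrongly removing lawful speech or wrongly leaving prohibited content up.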

In recital 18, the proposal outlines measures that hosting service providers can take to prevent the dissemination of terror-related content, including the use of tools that would “prevent the re-upload of terrorist content.” Commonly known as upload filters, such tools have been a particular concern for European digital rights groups. The issue first arose during the EU’s push for a Copyright Directive that would have required platforms to verify the ownership of a piece of content when it is uploaded by a user.
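Neither the recital nor the draft regulation specifies how such filters must work. In practice, re-upload prevention is commonly implemented by matching a fingerprint (hash) of each new upload against a database of previously removed content. The sketch below is a minimal, hypothetical illustration of that idea, not a description of any platform’s actual system:

```python
import hashlib

# Hypothetical sketch of an upload filter based on exact hash matching.
# Real systems typically use perceptual hashes so that re-encoded or
# lightly edited copies still match; a cryptographic hash like SHA-256
# only catches byte-identical re-uploads.

blocked_hashes: set[str] = set()  # fingerprints of previously removed content

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 fingerprint of the uploaded file's bytes."""
    return hashlib.sha256(data).hexdigest()

def register_removal(data: bytes) -> None:
    """Record a removed file so identical re-uploads can be blocked."""
    blocked_hashes.add(fingerprint(data))

def check_upload(data: bytes) -> bool:
    """Return True if the upload is allowed, False if it matches a removal."""
    return fingerprint(data) not in blocked_hashes

# Usage: once a video is removed, a byte-identical re-upload is rejected.
removed_video = b"...video bytes..."  # placeholder for real file contents
register_removal(removed_video)
assert check_upload(removed_video) is False
assert check_upload(b"a different video") is True
```

Even this simple mechanism illustrates the critics’ point: the filter knows nothing about context, so a war-documentation video that is wrongly removed once would be automatically blocked on every subsequent upload.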

“We’re fearful of function creep,” Evelyn Austin from the Netherlands-based digital rights organization Bits of Freedom said at a public conference.

“We see as inevitable a situation in which there is a filter for copyrighted content, a filter for allegedly terrorist content, a filter for possibly sexually explicit content, one for suspected hate speech and so on, creating a digital information ecosystem in which everything we say, even everything we try to say, is monitored.”

Austin pointed out that these mechanisms undercut previous strategies that relied more heavily on the use of due process.

“Upload filtering … will replace notice-and-action mechanisms, which are bound by the rule of law, by a process in which content is taken down based on a company’s terms of service. This will strip users of their rights to freedom of expression and redress…”

The draft EU proposal also applies stiff financial penalties to companies that fail to comply. For a single company, this can amount to up to 4 percent of its global turnover from the previous business year.

French digital rights group La Quadrature du Net offered a firm critique of the proposal, and noted the limitations it would set for smaller websites and services:

“From a technical, economical and human perspective, only a handful of providers will be able to comply with these rigorous obligations – mostly the Web giants.

To escape heavy sanctions, the other actors (economic or not) will have no other choice but to close down their hosting services.”

“Through these tools,” they warned, “these monopolistic companies will be in charge of judging what can be said on the Internet, on almost any service.”

Indeed, worse than encouraging “self-regulation,” the EU proposal would take us further away from a world in which due process or other publicly accountable mechanisms are used to decide what we say and see online, and push us closer to relying entirely on proprietary technologies to decide what kind of content is appropriate for public consumption — with no mechanism for public oversight.

Image by Georgejmclittle on Shutterstock

RDR is now seeking feedback on materials that will be used to develop pilot indicators to evaluate internet, mobile, and telecommunications companies on their policies and disclosures related to how targeted advertising affects the human rights of users and their communities.

As we announced last week, RDR is entering an exciting phase as we prepare to expand the RDR Corporate Accountability Index methodology to keep up with the rapidly changing technology sector and its impact on human rights. After the release of our inaugural 2015 RDR Index, we introduced extensive revisions to update the methodology for the second RDR Index in 2017; since then, we have made only minor revisions. In 2019 and 2020, we will expand and upgrade the RDR Index methodology to include new company types (such as Amazon and Alibaba), and we will add new indicators addressing some of the pressing issues at the intersection of human rights and technology that have emerged since the current methodology was first developed.

Specifically, RDR will work to determine how and to what extent the RDR Index methodology can be expanded to address malicious exploitation of platforms optimized for targeted advertising, as well as the unaccountable and non-transparent application of algorithms and machine learning. We are starting with a focus on targeted advertising and the company practices that it incentivizes, including some uses of algorithms and machine learning.

Why targeted advertising?

Our goal in developing indicators that address targeted advertising is to set global accountability and transparency standards for how major, publicly traded internet, mobile, and telecommunications companies that profit from targeted advertising can demonstrate respect for human rights online. In the future, RDR’s work in this area can inform the work of other stakeholders: investors conducting due diligence on portfolio risk; policymakers seeking to establish regulatory frameworks to protect the individual and collective rights of internet users; and activists looking to encourage companies to pursue alternative business models and to mitigate the human rights harms associated with targeted advertising.

Progress Update

We held our first stakeholder consultation in January in Brussels, where experts on privacy and data protection helped us refine a set of consultation documents that we are now sharing for feedback. We will be convening a series of stakeholder consultations (in person in various locations and via conference call) over the next several months, where we will solicit input from experts in civil society, companies, and government. If you would like to participate in such a convening, please let us know via email to methodology@rankingdigitalrights.org.

Consultation Documents

Consulting with a wide range of experts and stakeholders—including companies that are likely to be evaluated—is key to developing a methodology that is credible, rigorous, and effective. To that end, we have prepared a set of consultation documents that synthesize RDR’s approach to targeted advertising and human rights:

  1. Rationale for RDR’s methodology expansion to address targeted advertising: an overview of why and how the RDR research team is approaching the indicator development process.

  2. Human Rights Risk Scenarios: a list of “risk scenarios,” each describing human rights harms directly or indirectly related to privacy and expression that can result from targeted advertising business models and the choices they incentivize companies to make.

  3. Best Practices: a number of best practices for company disclosure and policy that could help prevent or mitigate these risks.

Send us your feedback

We welcome written feedback by May 31 on these consultation documents. The feedback will help to inform further in-person stakeholder and expert consultations that will take place between April and June, which in turn will inform the drafting of pilot indicators that will be tested later in 2019. Please send all feedback to methodology@rankingdigitalrights.org. We look forward to hearing from you.

Image by Artem Samokhvalov on Shutterstock

The 2019 Ranking Digital Rights Corporate Accountability Index will be released in May. The exact date and location will be announced next month. Watch our website for launch details, or sign up for our newsletter.

The 2019 RDR Index—the fourth RDR Index since the first was launched in 2015—is made possible by the hard work of our research team and active engagement by many of the companies we rank. Meanwhile, RDR has big plans in the works for the next three years.

We plan to upgrade and expand the Index methodology in 2019 and 2020 to address the rapidly evolving, increasingly complex human rights threats that internet users face. The fifth RDR Index will be published in 2021 with an expanded methodology and scope.

RDR has kept its methodology consistent since 2017 in order to track companies’ progress over time and provide companies with predictability. After we publish the 2019 Index this May, we will expand our indicators to address human rights harms associated with targeted advertising, algorithms, and machine learning. We will adapt the methodology to include more company types, especially powerful global platforms with core e-commerce businesses such as Amazon and Alibaba. We will also review the current methodology and research process and consider other potential changes in light of how technology and the companies we rank are evolving. The new methodology will be finalized by mid-2020 so that research can begin for the fifth Index, to be released in 2021.

Public consultation

We have started preliminary research and stakeholder consultations needed to draft indicators addressing human rights harms associated with targeted advertising. This month we will publish an update about our progress, release our first set of consultation documents, and invite feedback from all interested parties.

As our timeline for the rest of our methodology expansion and revision work progresses, we will continue to post updates and invite participation in the consultation process. The best way to keep up with our progress and plans is to subscribe to our newsletter here.

Organizational growth

In the second half of 2018, we conducted an impact assessment and undertook a strategic planning process. That process enabled us to sharpen the way we articulate our mission, vision, and theory of change, as well as how we describe our impact.

Our strategic assessment and planning process also enabled us to make some other key decisions about RDR’s priorities for the next three years. In addition to upgrading, strengthening, and expanding the Index, we will focus on three other strategic priorities: increasing our impact, visibility, and engagement; strengthening organizational structure and capacity; and diversifying funding while substantially increasing our budget. For more information, please see our new strategic priorities page.

Over the past five years we have proven the value of the RDR Index. Now the time has come to scale up for long-term impact and sustainability. As we enter this new phase, we look forward to working with companies, researchers, civil society advocates, investors, policymakers, and all other stakeholders who share our vision of a global internet that supports and sustains human rights. If you are interested in working with us to take RDR and the Index methodology to the next level, contact us. We look forward to hearing from you.