Photo by Amy Brouillette.

Key findings: Companies are improving in principle, but failing in practice

By Amy Brouillette

From the rampant spread of disinformation about COVID-19, to the viral spread of hate speech amid a global movement against systemic racism, to the “de-platforming” of Donald Trump, 2020 felt like a crash course in the consequences of endowing tech companies with unchecked power.

The past year also put a spotlight on the issues that lie at the very heart of our mission at Ranking Digital Rights: to hold digital platforms and telecommunications companies accountable for their policies and practices that affect people’s rights to freedom of expression and privacy.

In our research for the 2020 Ranking Digital Rights Corporate Accountability Index, none of the 26 companies we rank (which collectively provide information and communications services to billions of people around the world) came even close to earning a passing grade on our international human rights-based standards of transparency and accountability.[1]

Our methodology uses 58 indicators across three categories (governance, freedom of expression and information, and privacy) to evaluate company commitments, policies, and practices affecting digital rights. In 2020, we applied an expanded methodology that includes new benchmarks for what companies should disclose about their algorithmic systems and targeted advertising practices. We did this in order to better hold companies accountable for the technologies and systems that fuel the Big Tech business model, and that so often drive the spread of disinformation and harmful speech online. We also added Amazon and Alibaba to this year’s ranking, enabling us to assess the human rights commitments and policies of two of the world’s biggest online retailers.
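The sketch below illustrates, in simplified form, how an indicator-based score roll-up of this kind can work. The three category names mirror the methodology described above, but the equal weighting, the sample scores, and the function names are illustrative assumptions only, not our exact published scoring formula.

```python
# Simplified sketch of an indicator-based score roll-up.
# The three categories mirror the methodology described above; the equal
# weighting, sample values, and function names are illustrative assumptions.

def average(scores: list[float]) -> float:
    return sum(scores) / len(scores)

def total_score(categories: dict[str, list[float]]) -> float:
    """Average each category's indicator scores, then average
    the category scores into a 0-100 company score."""
    return average([average(indicators) for indicators in categories.values()])

# Hypothetical company with a few 0-100 indicator scores per category:
company = {
    "governance": [40.0, 55.0],
    "freedom_of_expression_and_information": [35.0, 60.0, 25.0],
    "privacy": [50.0, 45.0],
}
print(round(total_score(company), 1))  # 45.0
```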

The most striking takeaway is just how little companies across the board are willing to publicly disclose about how they shape and moderate digital content, enforce their rules, collect and use our data, and build and deploy the underlying algorithms that shape our world. We continue to see some improvements by a majority of companies in the RDR Index, and we found noteworthy examples of good practice. But these bright spots were overshadowed by a wealth of evidence of what so many advocates and experts warn is a systemic crisis of transparency and accountability among the world’s most powerful tech giants.

At the root of this systemic crisis, we found remarkably weak corporate governance and oversight of commitments, policies, and practices affecting internet users’ fundamental human rights to privacy, expression, and information. This is especially concerning as digital platforms and telecommunications companies alike are becoming more reliant on a business model that depends on the collection and monetization of users’ data on one hand, and the use of that data to target and optimize online content for maximum engagement on the other.

Across the board, companies disclosed almost nothing about how they use algorithms, and even less about how they develop and train these systems. They also revealed little information about how their algorithms and targeting systems work together to shape the digital content and ads that people are served.

Our purpose is not only to show how companies are falling short of their human rights commitments, but also to give clear benchmarks that enable them to develop policies and practices that truly respect these fundamental rights. The RDR Index offers a road map for companies’ improvement, and we know that even a few steps in the right direction are worth highlighting. This is why we look carefully at companies’ progress year-on-year. In a year like 2020, major changes to our methodology (particularly our new indicators on algorithmic systems and targeted advertising) caused many companies to incur significant score drops. But when we compare companies’ scores on indicators that did not change between 2019 and 2020, we see some noteworthy trends.

In earlier cycles of the RDR Index, companies based in the U.S. and the EU scored the highest, and typically improved the most. Google and Microsoft regularly topped the ranking, and when we began measuring year-on-year progress in 2018, Apple, Telefónica, Twitter, and Vodafone were the most improved companies. But in the past two RDR Index cycles, companies headquartered in other regions have dominated the “most improved” category.

Score changes based on a year-over-year comparison of indicators that remained the same between the 2019 and 2020 RDR Indexes.

For companies in Africa, China, Russia, and the Middle East, 2020 was a year of firsts. Baidu, which operates China’s leading internet search engine, published a human rights policy.[2] While the policy is limited in reach (the company pledges to protect human rights “within the bounds of national law”), this was still an unprecedented step for a Chinese tech company. Mail.Ru, owner of Russian social media site VK, also published a commitment to respect users’ freedom of expression and privacy rights. The two dominant telcos in the Middle East, Etisalat (UAE) and Ooredoo (Qatar), finally published privacy policies. And two companies published transparency reports for the first time: Russian search giant Yandex and South African telco MTN,[3] which also announced a wave of additional improvements on human rights due diligence. These companies all remain in the bottom half of our ranking—they still have tremendous amounts of work to do on human rights. But these improvements are meaningful, and they provide new ways for advocates and users to hold them to account.

Nevertheless, we saw little overall progress on the core issues that are driving so many of the problems we see with our information ecosystems today. On our fifth RDR Index—and in a year when so much was at stake—we were disappointed to find that companies failed to make the kind of substantive changes required to better protect human rights despite having a clear road map for doing so. This was especially evident among U.S. platforms, most of which made only incremental changes to select policies when it is clearer now than ever before that more holistic, systemic reforms are necessary.

Below, we discuss the high and low points of this year’s results and offer recommendations for companies to help bring their platforms and services into alignment with the public interest, in ways that support democracy and human rights.

Company Results

Twitter, the best of the worst

Twitter topped the ranking among digital platforms, earning 53 out of a possible 100 points and edging out Verizon Media (owner of Yahoo) by just one point. This may come as a surprise to some, especially as Twitter has faced criticism for its inconsistent enforcement of its own content rules. This issue was top of mind when the company suddenly suspended Donald Trump’s account following the January 2021 attack on the U.S. Capitol, after it had failed to enforce its rules against him and other public figures for many years. Harvard Law School’s evelyn douek described Twitter’s decision (alongside those of Facebook and other platforms that suspended Trump following the attack) as an “arbitrary and suddenly convenient” application of platform rules.

While Twitter came out at the top of our ranking, we see it as the best among the worst: it scored higher than all other platforms we evaluated, but the bar was dismally low. In comparison to its Silicon Valley counterparts, the company was more transparent about actions it took to remove content and suspend accounts for violations of platform rules. Twitter also earned high marks for its transparency about ad content and targeting rules.

On our indicators that measure companies’ respect for users’ rights to freedom of expression and information, Twitter earned the highest score among digital platforms. It stood out for reporting more data than most other platforms about government demands to censor content. It also published more information about its bot policies than any other platform.

Still, Twitter faltered in key areas. It trailed behind many of its U.S. peers on governance and human rights due diligence in particular, and it failed to publish an overarching commitment to respect human rights in its development and use of algorithms.

The company also fell short on critical indicators of security, an issue at the center of several recent high-profile incidents. It did not offer sufficient information about how it limits and monitors unauthorized employee access to user data, in spite of the 2019 revelations that two Twitter employees had spied on the accounts of thousands of Saudi Arabian dissidents on behalf of the Saudi government.[4] Nor did it publish a policy outlining how it handles data breaches, a noteworthy omission in light of last summer’s major security breach, in which hackers took control of high-profile accounts, including those of Barack Obama, Joe Biden, and Elon Musk.

Telefónica retained top spot among telcos

Telefónica held onto its top spot in the 2020 ranking, earning the highest governance score of all companies (including digital platforms), a full 14 points ahead of the next-highest scorer, on the strength of its human rights commitments. The company has made steady progress across a range of the policies we evaluate since we added it to the RDR Index in 2017. Alongside Vodafone, Telefónica was one of only two companies to publish an explicit commitment to respect human rights as it develops and deploys artificial intelligence. It disclosed more comprehensive human rights due diligence practices, and more accessible and predictable remedy procedures, than any other company in the RDR Index.

Telefónica also stood out among telcos for its transparency about government demands to shut down internet services, to censor and block content, and to hand over user data and communications.

But having earned only 49 out of 100 possible points overall, the company still has lots of work to do. Telefónica should continue to broaden the scope of its human rights impact assessments to include its policies on targeted advertising and its zero-rating programs. It should also provide users with more control over how their data is collected and used.

Amazon, the worst of the worst

When it comes to profits, the COVID-19 pandemic has been a boon for many of the companies in the RDR Index, especially U.S.-based Amazon and its Chinese rival Alibaba, both new to the RDR Index in 2020.

As with all companies we rank, we look at a company’s governance structure (which typically cuts across all operations), and then select specific representative services to evaluate. With Amazon and Alibaba, we chose e-commerce platforms and virtual assistants such as Alexa (what we call “personal digital assistant ecosystems”), along with Amazon’s Drive service.[5]

We decided to add these companies to our roster because e-commerce platforms are not just neutral gatekeepers of online marketplaces: they are active curators of content that billions of people see. And they engage expertly in “surveillance capitalism.”[6] As they sell shoes and books to billions of customers around the world, they also amass huge amounts of information about our consumer habits, browsing activities, and locations, all in an effort to keep us shopping.

The recent decision by Amazon Web Services (AWS) to remove “alt-tech” social media site Parler from its servers, following the January 2021 attack on the U.S. Capitol, left us wondering what standards the company used to make this decision. While we did not evaluate AWS in 2020—because our methodology focuses on services that are directly consumer-facing—the policies for services we did assess, we believe, serve as a strong proxy for how transparent Amazon is about such decisions across its services. The company’s move also reinforced our notion that Amazon’s overarching policies and practices merit evaluation against human rights standards.

Indeed, as both companies have provoked public concern about their growing dominance in the global marketplace, they have also faced scrutiny for enabling human rights abuses by governments and law enforcement, particularly through the software they build. Alibaba has come under fire for building and selling facial recognition software designed to help track Uighur Muslims in western China. Amazon has built facial recognition software that it sold to police departments across the U.S., and that MIT researchers found to exhibit racial and gender biases.[7] Although we do not evaluate these products in particular, we see this as a troubling sign when it comes to both companies’ overarching commitments to human rights, or lack thereof.

The results of our evaluation show that our concerns are justified: Amazon earned just 20 points out of a possible 100, ranking dead last among all 14 digital platforms and behind Alibaba. While this is not surprising given Amazon’s poor track record on human rights, our research shows just how far behind the company is on freedom of expression and privacy issues as well.

On privacy, too, Amazon earned some of the lowest scores among digital platforms. In spite of the vast amount of information it gathers and stores about its customers, Amazon scored poorly on our indicators measuring what information it collects from users and how it handles, stores, and secures that information. Amazon even scored lower than Alibaba in these areas; Alibaba’s edge likely reflects China’s recently implemented cybersecurity law and a forthcoming personal data protection law. Not surprisingly, Alibaba disclosed nothing about how it handles government demands for user information, which are ubiquitous in China. Although it was more forthcoming than Alibaba on this point, Amazon disclosed far less than its U.S. peers about how it handles such demands.

It is entirely possible that on privacy and security Amazon has stronger practices than its policies reflect. However, just like all companies, Amazon should publish clear policies on these and all of its practices affecting users’ rights.

On content moderation (an issue that we evaluate with our freedom of expression indicators), Amazon was especially opaque about its decisions to remove materials and content from its e-commerce platform. Indeed, Amazon faced criticism for allowing COVID-19 conspiracy theory books and other materials to be sold on its platform, even after other platforms had cracked down on such content.

Media coverage has shown that Amazon also sporadically rids its platform of certain materials, as when it abruptly removed hateful Nazi literature in early 2020. Moves like this raise the question: was the company enforcing its own rules, or was it reacting to external pressure? Amazon disclosed little about what content or activities it actually prohibits—and no information about what actions it takes to enforce these rules.

Qatar’s Ooredoo fell last in line

Qatar-based Ooredoo, which is majority-owned by the Qatari government, once again ranked lowest among telecommunications companies in the RDR Index, increasing its score from just five points in 2019 to six points in 2020. This small improvement came because Ooredoo finally made its privacy policy available to the public. The company’s persistent poor performance coincides with steady declines in internet freedom across the Arab region, where internet users face increasing government censorship and surveillance.

Ooredoo did not publish a policy for handling government demands for user data. While the Qatari government may have direct access to the network and to user communications without having to request it, there are no regulations in Qatar that prohibit companies from disclosing their process for handling such demands. Internet service providers should disclose this information so that users can understand the risks of using a particular service.

Ooredoo operates in 13 countries across the Middle East, North Africa, and Asia. Throughout 2019 and 2020, Ooredoo’s subsidiaries imposed network shutdowns at the behest of several governments, notably suspending mobile internet services in Rakhine and Chin states in Myanmar amid clashes between government troops and insurgents. Ooredoo also restricted access to its networks in Algeria and Iraq during anti-government protests, and its local subsidiary continued to block access to VoIP apps, which are banned in Qatar. The company offered minimal transparency about how it handles government demands to restrict services.

Baidu and Mail.Ru adopted human rights policies

Three companies in the RDR Index published human rights policies in 2020: Apple, Chinese search giant Baidu, and Mail.Ru, which runs Russia’s most popular email and social media platforms.

Apple made its first-ever explicit commitment to uphold human rights across its operations, from policies for users to its hardware supply chain. While it had previously published a commitment to respect privacy as a human right, the company had never made a similar commitment on freedom of expression until 2020. The absence of such a commitment is part of the reason that Apple had ranked last among U.S. companies in the RDR Index every year since we first added it in 2017.

Data from Indicator G1 in the 2020 RDR Index

Baidu’s policy commits the company to respecting users’ privacy “except as provided by laws and regulations,” and free speech “in accordance with national laws and regulations”; the policy clearly is not intended to challenge the Chinese government’s authority. And this should come as no surprise: why would the company want to jeopardize its legal standing in its home jurisdiction? While it might fall short of international human rights standards, a commitment like this still represents a valuable signal to users (and to foreign investors, who are increasingly concerned about human rights-related risks that companies may incur) and can be a tool for advocates aiming to push these companies to do better by their users.

Dangerous algorithms: How did hate and discrimination become so profitable?

Following the police murder of George Floyd last May, protests against systemic racism swept the U.S. and the world, and a fervent public discussion arose in parallel about how hate speech proliferates on social media platforms.

In July 2020, more than 1,000 companies—including Coca-Cola, Unilever, Target, and Verizon—pulled their ads from Facebook as part of #StopHateforProfit, a coalition-led campaign calling on the company to stop the spread of white supremacist content, incitement to violence, and messages of voter suppression across its platform, and to build stronger mechanisms for ensuring accountability and transparency.

George Floyd Memorial in Minneapolis, Minnesota, U.S. Photo by Fibonacci Blue via Flickr (CC BY 2.0)

The underlying logic of #StopHateforProfit—that Facebook will continue to allow problematic material for as long as such content is profitable—went hand in hand with the key argument in our spring 2020 It’s the Business Model series, where we showed how targeted advertising and algorithms drive the amplification of misleading and hateful content (and skyrocketing profits) at Big Tech firms, often at the expense of news, democracy, and human rights.

The targeted advertising business model rests on the ability to collect and monetize user data, and then—using algorithms that are developed and deployed to drive reach and engagement—to exploit that data by targeting users with tailored messages and content. This is a key reason why the most salacious, incendiary content so often runs rampant on social media.
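To make the dynamic concrete, the sketch below ranks a toy feed purely by predicted engagement. The posts, the engagement scores, and the single-signal objective are invented for illustration; they do not depict any particular platform’s system.

```python
# Minimal sketch of engagement-optimized feed ranking. The posts, the
# predicted-engagement scores, and the single-signal objective are
# invented assumptions that illustrate the dynamic described above.

posts = [
    {"id": "local-news",   "predicted_engagement": 0.04},
    {"id": "family-photo", "predicted_engagement": 0.07},
    {"id": "outrage-bait", "predicted_engagement": 0.19},  # incendiary posts often predict high engagement
]

def rank_feed(posts):
    """Order posts purely by predicted engagement, with no rights-based
    or quality-based constraint on the objective."""
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

for post in rank_feed(posts):
    print(post["id"], post["predicted_engagement"])
# "outrage-bait" is served first: when engagement is the only objective,
# the most provocative content wins the ranking by construction.
```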

Results from the 2020 RDR Index demonstrate just how unaccountable tech companies are when it comes to their data-driven business models. None of the social media services we evaluated offered adequate information about how they actually shape, recommend, and amplify both user-generated and advertising content. Digital platforms appear to exercise little control over the technologies and systems that are driving the flood of problematic content online, with no clear accountability mechanisms in place to prevent the cascade of harms to democracy and human rights that are occurring as a result.

For instance, while two European telcos we rank—Telefónica and Vodafone—published explicit policies on AI and human rights, no U.S. platform made any such clear commitments. Microsoft’s Responsible AI Principles and Google’s AI Principles instead focus on ethics, and on concepts such as fairness, accountability, and privacy by design. In contrast to international human rights doctrine, which offers a robust legal framework to guide the development and use of these technologies, ethics initiatives are neither legally binding nor enforceable, and they often reflect the normative values of their creators. Meanwhile, none of the other U.S. platforms we rank—including Apple, Facebook, and Twitter—published any overarching principles at all addressing how they develop and use algorithms.

Data for Indicator G1, Element 3 of the 2020 RDR Index

In 2020, we expanded our indicators on human rights impact assessments, asking whether companies assess how their targeted advertising and algorithmic systems could lead to discrimination. Our research showed that no platform in the entire RDR Index clearly disclosed whether it conducts robust, systematic impact assessments to evaluate its algorithms for possible bias or discrimination, despite the hefty volume of research and media attention on algorithmic bias over the past several years.[8] Platforms like Facebook have come under much-deserved criticism for developing and deploying algorithms that adversely affect marginalized populations and vulnerable groups—and for failing to implement systems to prevent these harms from recurring. Facebook’s 2020 Civil Rights Audit addressed AI bias, but the company disclosed nothing about evaluating bias as part of a systematic, formal risk assessment procedure going forward.

Across companies’ policy environments, there was a notable lack of overarching human rights-based commitments or due diligence mechanisms governing how platforms design, train, and deploy algorithms. Few platforms disclosed clear operational-level policies governing how personal information can be used to train algorithms, and fewer still laid out clear rules describing how content-moderation and content-shaping algorithms should be deployed across different services in ways that respect users’ rights to information, expression, and privacy.

Numerous reports have shown how algorithmic curation systems that are optimized for engagement can prioritize and amplify controversial and inflammatory content. Even Facebook’s own internal research showed that its algorithms are responsible for driving divisive, polarizing content. Platforms should therefore adopt clear, human rights-centered policies to guide the design and use of these systems and ensure these technologies do not cause such harms. They should also be transparent about how they develop the algorithms that shape the content users are served, including the variables that influence these systems—and give users ways to opt out of receiving algorithmically curated content altogether.

Telcos are spying on you too

All of the telecommunications companies we rank have ventured into the mobile ad market, tapping into the troves of data and insights they have on their customers in an effort to compete with platforms for a slice of the lucrative digital advertising pie.

Telcos profit from this data in a variety of ways, only some of which are known to the public. We know that they serve ads to their subscribers via SMS networks, and that they sell user data to third-party ad companies that can influence what kinds of ads subscribers might see on other digital platforms or apps. Telcos are not simply offering SMS-based ads to commercial marketers. They are also selling or sharing user profiles and data with other parties. Data brokers, ad networks, and even political operatives then use that information to target individuals with personalized ads on other platforms and around the web.

For these reasons, we expect both digital platforms and telcos alike to conduct human rights risk assessments on how their targeted advertising policies could harm the right to information, the right to privacy, and the right to non-discrimination, and to take steps to mitigate those harms. We also expect companies to be transparent about how their targeting systems work and give users clear options to control what data collected about them is used for the purposes of targeted advertising, and whether they even receive targeted content at all. Targeted advertising should be off by default.

Our data showed that telcos were remarkably opaque about these policies. Only a few offered any information on targeting rules and what types of ad targeting are prohibited. And not a single telco reported any data on how it enforces these rules, such as the number of ads removed or accounts suspended for violations. On top of this, telcos were vague about their data collection practices and how they process users’ information for the purposes of targeted advertising.

Yes, they track: Third-party data collection and data inference

Our research showed that most companies ranked in 2020 improved their explanations of how they handle information they collect directly from users—or so-called “first-party data.” This type of data can range from the personal details a user gives in order to use a platform or service, to “likes” on Facebook and searches on Amazon or Google.

Steady improvements in this area have been driven by a wave of stronger data protection regulations that have come into force in numerous countries across the world. Regulations like the EU’s General Data Protection Regulation provide minimum transparency standards for this type of first-party data collection and handling. Since Chinese lawmakers introduced measures in 2017 requiring companies to be more transparent about these practices, Baidu and Tencent have offered more public information about what types of data they collect directly from their users and why.

Original art by Paweł Kuczyński

But companies revealed little about their more problematic “third-party data” collection practices, which really lie at the heart of the surveillance capitalism business model. Third-party data can be purchased directly from data brokers—companies that specialize in collecting, analyzing, and selling personal information—and can also be collected with tracking technologies, like cookies. Whether through data brokers or tracking, third-party data collection poses major privacy concerns, as these practices (and the companies behind them) are all but invisible to users.

Data brokers have grown to be major players in the digital advertising industry, while remaining largely out of reach of the public and regulators. These companies—which can include credit rating agencies, like Experian, and big data processing and analytics firms, like Oracle and Salesforce—are responsible for collecting, analyzing, and sharing billions of data points on users with a range of different third parties, including governments.

Tracking technologies like cookies are no less invasive, recording clicks, views, and swipes alongside geo-location data, often without users’ direct consent or knowledge.

All of this data gets compiled into larger profiles that can be used to make inferences (or predictions) about a person’s political and religious beliefs, sexual orientation, race and ethnicity, education, income, consumer habits, and physical and mental health. These profiles are the potent secret ingredient used to develop tailored content and messages—including the kind increasingly used for political microtargeting—that we have seen in major elections around the world in recent years.[9]
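The sketch below shows, schematically, how tracked events can be compiled into a profile and mapped to inferred ad-targeting segments. The event types, inference rules, and threshold are invented for illustration and do not describe any actual data broker’s or platform’s system.

```python
from collections import Counter

# Schematic sketch of third-party profiling: tracked events are compiled
# into a profile, then simple rules infer targeting segments. The event
# types, rules, and threshold are invented for illustration only.

events = [
    {"user": "u1", "site": "parenting-forum.example", "action": "view"},
    {"user": "u1", "site": "parenting-forum.example", "action": "click"},
    {"user": "u1", "site": "loan-comparison.example", "action": "view"},
    {"user": "u1", "site": "loan-comparison.example", "action": "view"},
]

def build_profile(events, user):
    """Tally a user's tracked activity by site."""
    return Counter(e["site"] for e in events if e["user"] == user)

def infer_segments(profile, threshold=2):
    """Map repeated activity to ad-targeting segments (toy rules)."""
    rules = {
        "parenting-forum.example": "likely-new-parent",
        "loan-comparison.example": "likely-financially-stressed",
    }
    return [segment for site, segment in rules.items() if profile[site] >= threshold]

profile = build_profile(events, "u1")
print(infer_segments(profile))  # ['likely-new-parent', 'likely-financially-stressed']
```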

We found that companies were highly secretive about their third-party data collection practices—and gave only fragmented, incomplete information about what data they collect and for what purpose. Amazon’s privacy notice revealed that it “receives” data about “your interactions with products and services offered by our subsidiaries,” “information about internet-connected devices and services linked with Alexa,” and that it may collect additional information from credit bureaus.

Apple once again stood out for being the only platform we rank that does not track users around the web using cookies or other types of trackers—a key reason why it earned the highest privacy score among digital platforms in the 2020 RDR Index. A majority of the remaining platforms we evaluate disclosed that they deploy some kind of tracking technology, but gave users few ways to opt out of being tracked or to control whether the information collected on third-party platforms or apps can be used for targeting purposes at all.[10]

Content moderation: How transparent are transparency reports?

When tech companies step into the role of censor—and particularly when decisions about content can have a dramatic influence on political and public affairs—it is vital that platform rules are enforced transparently and consistently and in accordance with international human rights standards.

RDR has been pushing companies to be more transparent about their rules enforcement since our inaugural RDR Index in 2015. Our research has consistently pointed to a major gap between companies’ policies and the actual enforcement of these rules—which has left the door wide open to unaccountable and arbitrary enforcement.

In 2015, most U.S. platforms in the RDR Index had already started to regularly publish data on government demands to censor content, but not a single company ranked by RDR reported on content removals or account suspensions as a result of breaches of their own rules.[11]

But by 2016, Google, Microsoft, and Twitter had begun to disclose trickles of information. In a 2016 blog post, Twitter revealed that it had suspended over 125,000 accounts for “threatening or promoting terrorist acts.” Google reported that it had removed 92 million YouTube videos for violating its terms of service—1 percent of which were removed for hate speech and terrorist content—but revealed no more information about why the remaining videos were taken down.

In 2018, those same companies, plus Facebook, started to report this data more systematically. In the same year, Facebook also released its first-ever Community Standards enforcement report.

The 2020 RDR Index methodology includes new questions that reflect an emerging consensus about content moderation standards, consistent with many of those laid out in the Santa Clara Principles. These indicators ask companies to report much more granular data about content removals and account suspensions. How much and what types of content do they take down? How many accounts do they remove, and on what basis? This kind of information gives civil society, researchers, and the broader public valuable insight into these processes, and enables companies to be more accountable for how they enforce their own rules.
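To make “granular” concrete, the sketch below models the kind of enforcement record such reporting implies. The field names and rule categories are hypothetical, loosely inspired by the Santa Clara Principles rather than drawn from any company’s actual report schema.

```python
from dataclasses import dataclass, field

# Sketch of the granular enforcement data such indicators ask for.
# Field names and rule categories are hypothetical, loosely modeled on
# the Santa Clara Principles, not any company's actual report schema.

@dataclass
class EnforcementReport:
    period: str                                  # e.g. "2020-H1"
    removals_by_rule: dict[str, int] = field(default_factory=dict)
    suspensions_by_rule: dict[str, int] = field(default_factory=dict)
    appeals_received: int = 0
    removals_reinstated_on_appeal: int = 0

report = EnforcementReport(
    period="2020-H1",
    removals_by_rule={"hate_speech": 12000, "violent_threats": 4300},
    suspensions_by_rule={"platform_manipulation": 900},
    appeals_received=2500,
    removals_reinstated_on_appeal=310,
)
print(report.removals_by_rule["hate_speech"])  # 12000
```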

In 2020, seven companies provided some type of data about these actions, compared to zero when RDR first started tracking this issue in 2015. Twitter stood out for being far more transparent about its content moderation practices than any other platform we rank—a key reason why the company topped the RDR Index this year. Its Rules Enforcement report includes data on accounts suspended and content removed for different violations—like hate speech, violence, and child sexual abuse material. The report, however, did not disclose any data on less severe actions, like placing a warning label on certain content.

Amazon, Apple, Facebook, and Google were among the companies with the widest gap between their stated rules and proof that they are actually enforcing them. This gap is familiar to digital rights activists, who routinely push these companies to remove harmful content that appears to violate platform rules but nevertheless remains online. But it has become much better known to the public in recent years. Just last month, The Markup reported that both Facebook and Google had run ads for merchandise affiliated with a far-right militia group, despite having banned such ad content.

Although it was not yet active during our research period, we have closely followed the rollout of Facebook’s Oversight Board, made up of independent experts, which will give users a new way to formally appeal Facebook’s decisions to remove or preserve controversial pieces of content. This experiment in internet governance represents an important step in the direction of accountability and transparency for the company, but we remain concerned about key aspects of its execution. The board’s bylaws strictly limit the types of content decisions that users can appeal, failing to adequately acknowledge the role of algorithms in promoting and amplifying problematic speech. And we find it worrisome that the board’s bylaws and operations are not explicitly anchored in universal human rights standards, which apply to companies through the UN Guiding Principles on Business and Human Rights.

Network shutdowns continue, but some telcos are pushing back

While telcos may not have the same power as social media platforms to curate and amplify certain kinds of content and censor others, they provide our gateway to the internet. At the same time, telcos sometimes find themselves under pressure from governments wanting to restrict people’s access to certain websites, and in some cases, to shut networks down altogether.

Take South Africa’s MTN, one of the companies that improved the most since our last index. In 2020, MTN carried out network shutdowns at the behest of government authorities in Benin, Guinea, Liberia, and Sudan, where shutdowns were used in an effort to quell pro-democracy protests. Nevertheless, in contrast to previous years, MTN joined a handful of other telcos in committing to push back against such orders. In some cases, MTN also showed evidence of notifying users when carrying out shutdowns, another positive sign.

Data from Indicator F10 of the 2020 RDR Index

MTN still has a long way to go, but it is encouraging to see these measures of progress, especially for a company that routinely faces shutdown orders.

By contrast, India’s Bharti Airtel retained its low score of 15 points for its lack of transparency about a range of key policies affecting expression and information, and in particular for its process for handling government shutdown demands. Although India has seen more network shutdowns in response to government orders than any other country in the world, Bharti Airtel revealed little information about how it handles these demands.

Zero on zero: Companies are blind to the risks of zero rating

Our expanded methodology for 2020 also looked at zero-rating programs, wherein telcos provide access to certain online services or platforms at no financial cost to the user. Although they are sold as a way to promote internet adoption, zero-rating programs are not only a form of network prioritization that undermines net neutrality principles, but also a data trove for the companies that offer them. While also present in the U.S. and Europe, these programs are especially prevalent in developing countries, where internet penetration is lower.

A mobile phone shop in Lusaka, Zambia. Photo by Mike Lee via Flickr (CC BY-NC-ND 2.0)

When a platform is prioritized this way, there is significant danger of it dominating the digital landscape. This has proven true in the case of Facebook, which has pursued far more zero-rating agreements and other forms of content prioritization than any other platform we rank. These programs have generated enormous criticism for monopolizing connectivity and digital communication markets, and limiting users’ access to things like independent media. When paired with Facebook’s lackluster content moderation, zero rating has been cited as a factor contributing to sectarian violence, most notably ethnic cleansing operations carried out by Myanmar’s military.

We looked for evidence that telcos and digital platforms that offer zero-rating programs assess the programs’ human rights risks. Many of the telcos we rank do operate such programs, yet not a single company disclosed evidence of assessing whether its programs could be discriminatory, or could otherwise harm users’ rights to expression, information, and privacy. No platform showed any evidence of conducting human rights due diligence on these programs either, despite their overwhelming potential to cause human rights harms.

On human rights, companies are talking the talk, but not walking the walk

A decade after the UN endorsed new guidelines aimed at strengthening corporate responsibility for human rights, we can reflect on the progress tech companies have made in fulfilling these commitments—and on where they are still falling short.

The UN Guiding Principles on Business and Human Rights, endorsed by UN member states in 2011, set out the foundation for strong governance and oversight of human rights through their “protect, respect, and remedy” framework. To demonstrate respect for human rights, companies should make public commitments to human rights, conduct robust due diligence to identify and mitigate human rights harms, and provide remedy to address the negative consequences of harms should they occur.

Are the companies we rank meeting these commitments? As noted above, a growing number of companies are making formal commitments to human rights. But our research shows troubling gaps in other areas. For instance, with Apple having finally joined the pack, all the U.S. companies in the RDR Index performed relatively well in 2020 when it came to their commitments to human rights principles. But most scored poorly—often on par with their counterparts around the world—when we looked at how these commitments are implemented in practice, such as through human rights due diligence, regular engagement with civil society, and remedy mechanisms for addressing human rights harms.

Companies across the board were weakest on human rights due diligence, with most failing to demonstrate that they conduct robust, systematic assessments to identify and mitigate the human rights risks of their policies and practices across their global operations. By “robust” and “systematic” we mean that companies should integrate key accountability mechanisms into the risk assessment process. This includes conducting additional evaluations whenever risks are identified, ensuring that senior leadership reviews and considers the results of assessments in its decision-making, and having assessments audited by an independent third party.[12]

For companies that scored anything at all on our due diligence indicators, it was most often for providing some evidence of conducting assessments on risks to freedom of expression and privacy rights related to government demands in the various jurisdictions in which they operate. Few companies have broadened the scope of their due diligence beyond this, to include assessments of their own policies and practices, especially those, like targeted advertising, with such clear human rights risks and implications.

One of the more concerning findings of our research this year is how few companies disclosed any evidence of conducting risk assessments on their targeted advertising policies and practices—a finding we also highlighted in 2019. The 2020 RDR Index applied an expanded indicator on this issue to evaluate whether companies are conducting human rights due diligence on their broader business model—and specifically, to see if companies are conducting risk assessments on how ad targeting could be discriminatory, or pose threats to users’ rights to expression and privacy.

Not a single company in the entire RDR Index disclosed anything about assessing freedom of expression or privacy risks related to their targeted advertising policies and practices. Of the 26 companies we evaluated, only Facebook revealed that it conducted a very limited assessment of discrimination risks associated with targeted advertising in the U.S. market.

Companies that are members of the Global Network Initiative (GNI) performed slightly better than the non-GNI members on our human rights due diligence indicators, owing to their more thorough assessments of risks related to government demands in the different jurisdictions in which they operate. On our new risk assessment indicators evaluating targeted advertising and algorithms, GNI members performed no better than non-GNI members. This reflects the GNI’s narrow focus on government demands, to the exclusion of other key issues affecting online rights.

When it came to remedy, we saw little change in 2020. Once again—with the exception of Telefónica—most companies failed to offer clear, predictable remedy to users who feel their freedom of expression and privacy rights have been violated.

Data from Indicator G6a of the 2020 RDR Index

In sum, companies still have much work to do to turn their human rights commitments into demonstrated practice. This does not diminish the importance of making explicit, formal commitments: they are the bedrock of a company’s public human rights obligations across its global operations, and an essential first step in building strong corporate governance over these issues. But a public commitment alone is not enough to ensure that these rights are actually protected in practice.

Concluding thoughts

Looking back, forging ahead

Since we launched the first RDR Index in 2015, the number of companies pledging to protect users’ freedom of expression or privacy, or both, has steadily grown every year. The number of companies that conduct any type of human rights due diligence has grown every year as well. But as we show throughout the 2020 RDR Index, even companies that rank toward the top have a long way to go when it comes to implementation, and there is much we do not know about how companies actually operate in practice.

As a result, people around the world still lack basic information about who controls their ability to connect, speak online, or access information. Billions of internet users are largely in the dark about who has the ability to access their personal information and under what circumstances. Governments are facing legitimate threats perpetrated by extremists, and other governments seeking to weaken or divide them, but sometimes fail to fully consider the human rights implications of their responses. While some regulations have forced companies to improve their protection of users’ rights, other regulations have made it harder for companies to meet global human rights standards for transparency, responsible practice, and accountability in relation to freedom of expression and privacy.

Unfortunately, many laws and regulations around the world have hampered companies’ performance in the RDR Index over the years. Worse, many governments frequently force companies to take actions that violate users’ human rights, such as censoring and surveilling human rights defenders and investigative journalists. Laws in China and Russia prevent companies in those countries from disclosing key information about surveillance demands by authorities. In India, the law prevents telcos from disclosing who ordered network shutdowns, or for what reason. National security laws in many parts of the world, including in the U.S. and in many European countries, require companies to collect and hold user information for excessive periods of time and prevent companies from being fully transparent about government demands for that information.

In North America and Western Europe, we are also seeing a spate of initiatives focused on holding platforms legally liable for harmful content online. As we argued at length in our 2020 series, It's the Business Model, laws that increase platforms’ liability for user content can be especially counterproductive because they give companies even more decision-making power to censor speech and information in ways that may not be compatible with human rights. Instead, we made an explicit call for U.S. lawmakers to enact regulations rooted in human rights standards that would rein in digital platforms’ data collection and monetization practices, and impose robust transparency mandates to force companies’ algorithmic systems into the light. If U.S. lawmakers act on these recommendations, the effects could be global, given the enormous reach of Silicon Valley’s titans and the degree to which companies like Facebook have made their services all but essential to people’s abilities to communicate.

Even when faced with challenging regulatory environments, companies must be more proactive about protecting users and informing them about the ways that their rights might be curtailed when using specific platforms or services. We are happy to report that some of the improvements that companies have made over the past five years have come about in the absence of laws or regulations that compel them to make changes. This shows that there are other drivers of improvement, apart from regulation, that can result in meaningful change. Sustained media attention, public opinion, pressure from civil society, and investor engagement all play a role in pushing companies and governments alike toward more rights-respecting policies. ESG investors concerned with social and governance risks—who control a growing proportion of funds invested in stock markets around the world—are also starting to push companies to improve their commitment to and respect for human rights.[13]

We are also encouraged to see that new laws in some places have forced companies to make notable improvements since 2015. Regulation in Europe, California, and even China (see our China spotlight) has driven improvements in companies’ disclosed policies and practices related to users’ privacy and security. Half of the 26 companies we ranked in 2020 disclosed a policy about how they handle data breaches—up from just three when we first introduced an indicator evaluating this issue in 2017. And as we noted in the 2019 RDR Index, a majority of companies have improved their privacy scores as they revised their policies to comply with the stronger data protection rules of the European General Data Protection Regulation (GDPR)—although companies with the highest privacy scores in the RDR Index are those that go beyond the GDPR’s minimum standards.

Regulation that is firmly grounded in international human rights standards can force companies to make further changes. But companies have resisted such changes, which often clash with their fundamental business model.

We are encouraged, for example, by the EU’s forthcoming Digital Services Act (DSA).[14] In what will be the most sweeping overhaul to Europe’s internet regulations in 20 years, the draft legislation contains a number of measures aimed at boosting transparency and accountability by platforms about how they manage and govern content. Provisions include mandatory risk assessments and requirements for companies to explain how algorithms are used to shape and rank content—in alignment with standards promoted by RDR. Still, proposed measures on risk assessment are vague, and there are key questions over provisions that could give platforms even more unchecked decision-making power to remove content[15] (see more of our policy recommendations).

Meanwhile, and as we have consistently emphasized, companies that are fully committed to protecting and respecting their users’ rights should not wait for regulations to force them to act. Where law is absent, unclear, or insufficient, RDR’s indicators can be used to guide company policy and practices. While our individual company report cards offer specific recommendations for each company in the 2020 RDR Index, our general company recommendations below are intended for all digital platforms and telecommunications companies, regardless of whether we were able to include them in the RDR Index.

Footnotes

[1] See findings from previous RDR Index cycles, including the 2019 Ranking Digital Rights Corporate Accountability Index, https://rankingdigitalrights.org/2019index/

[2] Baidu’s human rights policy was published in November 2020. As the research period ended on September 15, 2020, the policy was not accounted for in Baidu’s 2020 RDR Index scores, but will be included in the next index cycle.

[3] MTN’s transparency report was published in November 2020. As the research period ended on September 15, 2020, the report was not accounted for in MTN’s 2020 RDR Index scores, but will be included in the next index cycle.

[4] In 2019, it was revealed that two Twitter employees had breached the company’s internal systems to access the information of thousands of accounts that had criticized the Saudi Arabian government. The incident came to light when the U.S. Department of Justice charged the two men with “acting as illegal agents of a foreign government.” See U.S. Department of Justice press release, “Two Former Twitter Employees and a Saudi National Charged as Acting as Illegal Agents of Saudi Arabia,” November 7, 2019. https://www.justice.gov/opa/pr/two-former-twitter-employees-and-saudi-national-charged-acting-illegal-agents-saudi-arabia

[5] RDR did not evaluate Amazon Web Services in the 2020 RDR Index, because our methodology focuses on services that are directly consumer-facing. We plan to explore adding more of Amazon’s services in future index cycles.

[6] The predominant business models of the most powerful American internet platforms are surveillance-based. Built on a foundation of mass user-data collection and analysis, they are part of a market ecosystem that Harvard professor Shoshana Zuboff has labeled surveillance capitalism. See Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (New York: Public Affairs, 2019).

[7] Amazon declined to submit its software for an audit by the National Institute of Standards and Technology, maintaining that its internal audits had revealed no evidence of bias. Months after MIT’s Joy Buolamwini and Deb Raji published their study, and at the height of protests against systemic racism in the U.S., Amazon vowed to stop selling this software to law enforcement for one year.

See Joy Buolamwini, “Response: Racial and Gender bias in Amazon Rekognition—Commercial AI System for Analyzing Faces,” Medium.com, January 25, 2019, https://medium.com/@Joy.Buolamwini/response-racial-and-gender-bias-in-amazon-rekognition-commercial-ai-system-for-analyzing-faces-a289222eeced; and Karen Weise and Natasha Singer, “Amazon Pauses Police Use of Its Facial Recognition Software,” New York Times, June 10, 2020, https://www.nytimes.com/2020/06/10/technology/amazon-facial-recognition-backlash.html

[8] Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York: NYU Press, 2018).

[9] The U.S. and Kenya are two countries where there is ample evidence of technology and data analysis firms using political microtargeting on behalf of major political campaigns. See Njeri Wangari, “Data and Democracy: What Role Did Cambridge Analytica Play in Kenya's Elections?,” Global Voices, November 3, 2017, https://advox.globalvoices.org/2017/11/03/data-and-democracy-what-role-did-cambridge-analytica-play-in-kenyas-elections/; and Anthony Nadler, Matthew Crain, and Joan Donovan, Weaponizing the Digital Influence Machine: The Political Perils of Online Ad Tech (New York: Data & Society Research Institute, 2018), https://datasociety.net/wp-content/uploads/2018/10/DS_Digital_Influence_Machine.pdf

[11] See page 25 of Ranking Digital Rights 2015 Corporate Accountability Index (Washington, DC: New America, November 2015), https://rankingdigitalrights.org/index2015/assets/static/download/RDRindex2015report.pdf

[12] See indicator G4a for RDR standards for robust, systematic risk assessments: https://rankingdigitalrights.org/2020-indicators.

[13] Nathalie Maréchal, Rebecca MacKinnon, and Jessica Dheere, Getting to the Source of Infodemics: It’s the Business Model (Washington, DC: Ranking Digital Rights at New America, May 2020), https://www.newamerica.org/oti/reports/getting-to-the-source-of-infodemics-its-the-business-model/good-content-governance-requires-good-corporate-governance/

[14] Proposal for a Regulation of the European Parliament and of the Council on a Single Market For Digital Services (Digital Services Act) and Amending Directive 2000/31/EC (Brussels, Belgium: European Commission, December 15, 2020), https://ec.europa.eu/info/sites/info/files/proposal_for_a_regulation_on_a_single_market_for_digital_services.pdf.

[15] See critiques by Center for Democracy and Technology (CDT) here and ARTICLE 19's first reaction here.
