Companies are stonewalling their users when it comes to how they develop and deploy algorithmic systems and how they infer data about them.
In our last ranking, we debuted new standards for disclosures about the development and deployment of both algorithmic and targeted-advertising systems. No company scored well on these standards. In fact, their inclusion in the ranking brought most companies’ scores down. Our current ranking, then, offers the first chance for us to look at whether companies have made any progress on these indicators. The answer is a resounding no.
To assess companies’ transparency in these areas, we ask whether they commit to human rights and to conducting human rights due diligence in the development of these systems and whether they make policies on algorithmic systems and targeted advertising available. Of all companies, only Microsoft earned any credit for providing access to algorithmic system development policies, the result of LinkedIn (new to our ranking this year) providing vague explanations of how it uses user data to develop machine learning models and how the company addresses bias in large-scale artificial intelligence (AI) applications.
Companies did somewhat better in disclosing information about how they use algorithms to curate, recommend, and rank content, but in most cases they stop short of saying what kind of controls users have over these systems. Similarly, while some companies provide some information about how and what they infer about users from data they collect, no company discloses that it limits the data that can be inferred to what is necessary to provide the service offered.
Regardless of jurisdiction, companies earn the most credit on our security indicators, but policies on data breaches can be hard to find.
For 20 out of the 43 services we evaluated, we could not find any description of the company’s processes for responding to a data breach. In April 2021, the sensitive personal data of more than 500 million Facebook users, such as email addresses and location information, was leaked. Yet the only information the company discloses related to data breaches pertains to its response to the Cambridge Analytica scandal. It says nothing about whether policies and practices are in place that can systematically address a data breach when it occurs.
Meta was not the only company that failed its users on this issue. Microsoft lags, too. And both top-ranked Twitter and bottom-ranked Amazon, along with Google, VK, and Samsung, failed to provide any explanation of whether and how they notify authorities or users about data breaches. Notably, two Chinese companies we rank, Alibaba and Baidu, earned the highest scores.
Spotlighting Services: E-Commerce, Virtual Assistants, and LinkedIn
Amazon’s rock-bottom score is an outlier among U.S. companies. To catch up, it must make transparency a priority, particularly around enforcement of its own rules.
In 2020 we added e-commerce giants Amazon and Alibaba to our ranking. Despite being headquartered in very different legal and political environments, both companies placed at or near the bottom of our ranking. And not much has changed this year, leading us to ask why the e-commerce companies are scoring so poorly. Does it have to do with the nature of the services we’re evaluating—e-commerce and virtual assistants—or something else?
We reviewed the companies’ performance across our categories and noted that both ranked at (Alibaba) or near (Amazon) the bottom on governance, along with another Chinese company, Tencent. We then looked more closely at our evaluations of each company’s e-commerce services, Amazon.com and Taobao.com. Amazon edges out Taobao on governance but lags behind the Chinese service in our freedom of expression and privacy categories. We noted a similar pattern in reviewing the scores for virtual assistants: Amazon’s Alexa outpaces Alibaba’s AliGenie but trails Apple’s Siri and Google Assistant. In every case, we noted that while the scores of Alibaba and its services are aligned with those of its peers in China and other jurisdictions outside the U.S., Amazon is an outlier when compared to other U.S. companies, falling far behind them in all our categories.
If e-commerce isn’t the cause of the lower scores, the question is, why doesn’t Amazon do better?
“Whether e-commerce, virtual assistants, or social media, the targeted advertising business model is still at the root of some of the internet’s worst human rights harms.”
Amazon.com scores lower on most of our standards than its U.S. peers, but one area stood out: censorship and content governance. The service shared no information about how it responds to government demands or private requests to censor content or restrict accounts. It earned the lowest score (20%) among all platforms we rank on our standard asking companies to explain how they enforce their own content rules.
Only Amazon can explain why it has neglected these policy areas. We do know it is not a matter of resources. In February this year, Amazon reported record profits and disclosed its advertising revenue—USD $31.2 billion in 2021—for the first time. Advertising is its third-largest source of income, after cloud computing and e-commerce. With its massive store of first-party data, the company is moving into the ad business, competing with ad-tech giants like Meta and Google, which also outscore Amazon on transparency around ad content and targeting policies. Whether e-commerce, virtual assistants, or social media, the targeted advertising business model is still at the root of some of the internet’s worst human rights harms.
Apple’s and Google’s virtual assistants outperform Amazon’s and Alibaba’s in our ranking but they do not disclose how their algorithms work.
Last year was also the first time we evaluated virtual assistants Alexa (Amazon) and AliGenie (Alibaba), which we then called “personal digital assistants.” This year, we added Apple’s Siri and Google Assistant to the ranking, and the results mirrored our overall findings for their parent companies. Siri and Google Assistant were significantly more transparent than Alexa or AliGenie, even after the latter two improved their performance this year. AliGenie disclosed more about how it uses algorithmic systems to moderate content, and Alexa provided new information about its use of user data to train its algorithms. Only AliGenie disclosed anything about how it uses its vast troves of information about users’ interests, hobbies, and browsing habits to algorithmically recommend content and products.
Virtual assistants are a growing concern for privacy and for freedom of expression and information, all the more so because companies disclose so little about how they work. Most virtual assistants need to listen constantly for a “wake word,” meaning they sometimes record things they should not. In a U.S. class action lawsuit, plaintiffs accused Amazon’s Alexa of accidentally recording their conversations without being prompted, and Apple faced a similar suit regarding Siri. We know very little about what companies do with that data, though we can safely assume that the end goal is financial profit. And when virtual assistants are connected to “smart homes,” their opaque AI risks providing a vulnerable entry point for hackers.
The voice interface also changes how users interact with information in subtle ways, with potentially far-reaching consequences for users’ rights. Virtual assistants return queries with a single answer, which users tend to interpret as empirical truth. In contrast, text-based search engines provide multiple results, so users can easily see that their question has many possible answers. This single answer is concerning from multiple perspectives, including competition and safety, particularly for virtual assistant systems that offer third-party applications.
LinkedIn offered some explanation of how its algorithms work but failed to disclose much information on content removals and third-party requests.
This year, we evaluated Microsoft’s LinkedIn platform for the first time. Because it connects employers to potential employees, the decisions its algorithms make when organizing content can directly impact users’ livelihoods. It was one of the only services to offer some information about how it processes user data to develop machine learning models and how the company addresses AI bias.
It was the latest U.S.-based social media platform to pull out of the Chinese market, after the government forced it to censor activists and journalists in 2021. Though LinkedIn’s policies were more transparent and rights-respecting than those of the Chinese and Russian social media platforms we evaluated, it lagged behind U.S. peers Facebook and Twitter, as well as most of Microsoft’s other services. The platform, which has nearly a billion users, was particularly weak on freedom of expression, failing to fully explain the circumstances under which it removes content in line with internal rules or external requests, or the volume of these restrictions.
Charting the Future of Big Tech Accountability
Whereas governments and their institutions have obligations to their citizens, including the obligation to enact mechanisms that ensure transparency in decision-making, companies are notoriously opaque. They argue that disclosing too much about their policies, revenues, and technologies can compromise their competitive position, and therefore jeopardize their shareholders’ returns.
Pushing a company to change requires first understanding how it operates. This is what RDR’s research methodology is designed to do: to show whether a company incorporates consideration of fundamental human rights into its policies and practices. With what we learn, we can then engage a range of actors to hold companies accountable both for what they say they do and for what we think they ought to do. Finally, we must connect our work to companies’ driving force: their bottom lines.
Over the last year, a growing group of civil society actors, including shareholders and institutional investors, whistleblowers, policymakers, and former employees, has scored big wins against Big Tech. Below we note three areas where momentum is building and, with it, hope for change.
Companies’ transparency on human rights due diligence, a key to avoiding human rights harms, is exceptionally poor.
Only Microsoft earns a perfect score on any of our human rights due diligence indicators—for its transparency on assessing the impact of government policies on freedom of expression and privacy. Yet it discloses much less about any due diligence it conducts on its own operations. This disparity is part of a broader trend: several companies—Apple, Tencent, Twitter, Yandex—published snippets of new information about how they assess the impacts of government policies on freedom of expression and privacy. But across the board, companies showed far less interest in examining the risks posed by their own products and business operations.
Take targeted advertising. Despite the clear human rights harms that stem from targeting systems, not a single company has announced a comprehensive human rights impact assessment of the mechanisms it uses to tailor ads to its users. In fact, none of the 14 global Big Tech platforms has improved in this area at all over last year. Meta’s civil rights audit remains the only assessment that comes close, its impact diluted by its limited scope, which covered only the potential discriminatory effects of targeting and which was centered on the U.S. while largely disregarding harms caused elsewhere.
The same opacity permeates what companies say about the human rights due diligence that goes into developing and deploying algorithms. While regulatory efforts to rein in unaccountable algorithms, themselves a focus of growing scrutiny by shareholders, are afoot in both the EU and China, neither effort has yet compelled companies to expand their transparency on these fronts.
Still reckoning with the business model: to fix the internet, we must first fix online ads.
For the second year in a row, none of the 14 companies we rank earned more than 50% of the possible points on our targeted advertising indicators. Companies typically have rules for ad content and for ad targeting, but independent research suggests that they sometimes do a bad job at enforcing these rules. If companies were more transparent about how they enforce their rules (and what technologies lie behind their processes), we would know more. But we know from our own research that among the industry’s leaders, there is virtually no transparency reporting about ad policy enforcement. (TikTok, which we don’t rank, discloses the raw number of rejected ads.)
Apple, a company whose public rhetoric vaunts its commitment to privacy, came in last, with 19.82%. This is hard to reconcile with its very public war on third-party tracking—and the surveillance advertising it enables—through its App Tracking Transparency program, which Meta says will cost the social media giant $10 billion in yearly ad revenue, even as Apple’s own ad revenue skyrockets. Our data underscores the fact that Apple needs to come clean about its own ad business.
Companies fared even worse on algorithmic transparency, where the highest score was Yahoo’s 22.45%. Algorithmic systems are the beating heart of the Big Tech business model: without automation, platforms cannot hope to achieve the global scale and market dominance that is key to their astronomic profits. Civil society, investors, policymakers, and the public all clamor for basic transparency about these systems that impact every facet of our lives, to no avail.
ESG investing and shareholder action are tying rights to risks—and companies’ bottom lines.
The human rights impacts of technology have become glaring enough to shake up the financial markets that give Big Tech companies life. Shareholders have emerged as a powerful voice in the push for corporate accountability in the tech sector—and often as powerful allies of the human rights community. But the odds are often still stacked against them in their efforts to press for change.
So far, 2022 has been a banner year for shareholder activism. As of February, members of the Interfaith Center on Corporate Responsibility, a major coalition of shareholders and allied organizations seeking to promote responsible corporate behavior, had filed more than 400 shareholder proposals for the 2022 proxy season. Retail and institutional investors with stock in the targeted companies will vote on many of these proposals at each company’s annual meeting.
A great deal of this momentum has been fueled by the meteoric rise of ESG (environmental, social, and governance) investing. ESG investors seek to determine how well companies are fulfilling their responsibilities as stewards of social and environmental good. But without a strong foundation, ESG can easily turn into ethics-washing. That is why investors are coalescing around human rights standards as the ideal benchmark with which to assess the risks to society that companies generate or enable. Among shareholders, RDR’s standards are becoming widely shared criteria for evaluating tech companies’ transparency and for pinpointing cases where their declarations deviate from their deeds.
Case in point: last year, investors representing nearly USD $6 trillion in assets signed an Investor Statement on Corporate Accountability for Digital Rights challenging companies ranked by RDR to make specific improvements identified by our team. This was the capstone to our long-standing collaboration with the Investor Alliance for Human Rights, which rallied investors to the cause. Since then, new signatories have stepped up and the value of their collective holdings has increased to $9 trillion.
This year we also broke new ground by directly supporting shareholder resolutions at Alphabet and Meta challenging the two tech titans to assess the human rights impacts of their targeted advertising systems. The resolution aimed at Meta will be one of a dozen that the company’s shareholders will vote on at the end of May. Investors have continued to cite our standards in resolutions targeting Apple and Twitter this proxy season; dozens more resolutions align with them. In particular, resolutions demanding that companies conduct human rights impact assessments and improve transparency reporting have catapulted into the mainstream, pushing juggernauts such as Microsoft to make new commitments and winning the support of a critical mass of shareholders at Apple.
But the barriers to effective shareholder action on human rights remain enormous. Chief among them are multi-class stock structures. Employed by tech giants like Alphabet, Meta, and Snap, these structures concentrate power in the hands of a small clique of founders and insiders by granting them inflated voting power relative to ordinary shareholders. Ultimately, this gives company leadership the ability to deflect calls for accountability, even if those calls enjoy overwhelming support. Multi-class structures are hallmarks of poor corporate governance, entrenching unaccountable leadership, disenfranchising shareholders, and shifting the risks of a company’s dereliction onto the public. For all these reasons, RDR advocates for dismantling multi-class structures and reversing a set of rules that further stifle shareholders’ ability to hold companies accountable in the U.S.
Broad neglect of human rights due diligence by U.S. tech giants continues to exacerbate harms in the majority of the world, which despite comprising the largest proportion of users by far receives the fewest resources for trust and safety.
When Russia invaded Ukraine in February, tech companies were swift to respond. Among other measures, Meta devoted extra staffing to content review, YouTube blocked “Russian state-funded media,’’ and Apple disabled traffic and live incidents features for its maps application in Ukraine. This stands in striking contrast with companies’ inaction and slow responses to crises in other parts of the world, such as incitement to violence and hate speech in Ethiopia’s ongoing conflict.
This inconsistency is exacerbated by two factors. First, human rights due diligence on operations and policy implementation in the Global South is lacking and is conducted mainly after the fact, as with Facebook’s long-overdue decision to conduct a human rights-based assessment of its impacts in Palestine after the period of escalated violence in May and June of 2021. Second, resources for content moderation are unequally allocated outside the U.S. and Western Europe. For instance, according to the Facebook Files, Meta allocates 87% of its budget for combating misinformation to issues and users based in the U.S.
With the exception of Amazon, all U.S. tech companies ranked by RDR had relatively strong commitments to human rights, disclosed governance practices and management oversight over these issues, and had in place employee training and whistleblower programs to implement their human rights commitments. Yet, without strong human rights due diligence and more equal distribution of resources for content moderation across the world, users in the Global South will continue to bear the brunt of inconsistent implementation of tech company policies.
Companies are engaging more with civil society and investors but ignoring the need for engagement on algorithms and ads, and they are neglecting users’ rights to remedy.
Faced with mounting pressure from policymakers seeking to regulate them, shareholders concerned about the material risks stemming from their governance and operations, advertisers worried about brand safety, and workers fed up with feeling complicit in their employers’ harms, companies are gradually stepping up their engagement with stakeholders, including RDR.
Each year, as a part of our research methodology, we offer companies an opportunity to review our preliminary results and make arguments—supported by evidence that meets our criteria—that they should earn credit where we saw none. Platforms are increasingly providing constructive feedback on these results, recognizing that it can lead to an improvement in their scores.
This year, every platform we rank except the Chinese ones and, perplexingly, Google offered such feedback. While Alibaba, Baidu, and Tencent have fewer incentives to engage with the human rights community, Google’s lack of input is an anomaly among U.S. platforms for which we have no explanation. It is also deeply concerning, given the power the company has to shape our information environment through its dominant search and advertising services.
Our standards set a bar for companies to regularly discuss freedom of expression and privacy with a range of interlocutors. We ask whether a company participates in a multistakeholder initiative with a credible assessment mechanism, like the Global Network Initiative (GNI), or whether it discloses any other kind of systematic engagement with non-industry, non-governmental stakeholders. This year, no company earned more than half credit, and six companies, all based outside the U.S., earned nothing at all.
Users have a right to contest decisions made about their content and accounts. Every digital platform should maintain open channels through which users can voice their concerns and seek remedy when a platform causes harm, without special treatment for VIPs. Yet our data shows that companies are still failing to prioritize remedy mechanisms.
Although this year did not feature the kind of dramatic collapse that disabled Meta’s content moderation appeals system, companies’ remedy policies largely stagnated. Since our last round of evaluations, when we broke out an indicator on content moderation appeals, we have noted virtually no improvements in this area. Companies are still tight-lipped about whether they notify users whose content is restricted, how long it takes to address an appeal, and what role human reviewers and algorithms play in the process.
This lack of attention to upgrading remedy policies is especially disappointing at a time when human rights actors are increasingly coalescing around the importance of these disclosures. Such neglect is galling enough in times of peace, but armed conflict has silenced millions of voices, from Afghanistan to Myanmar to Ukraine. Under these conditions, reliable grievance channels are of paramount importance.