Key Findings from the 2022 RDR Big Tech Scorecard 

By Afef Abrougui, Jessica Dheere, Nathalie Maréchal, Zak Rogoff, Jan Rydzak, Veszna Wessenauer, and Jie Zhang

 


 

Once again, none of the 14 digital platforms we evaluated earned a passing grade. The overall average of companies’ scores in our ranking ticked up slightly this year, but such incremental progress, while encouraging, is far from enough. We had hoped for more, given the widespread recognition of how companies’ governance and operations, and particularly their business models, are corrupting our information environments, compromising human rights, and undermining our democracies.

“companies are content to conduct business as usual when the state of the world demands anything but”

In short, their lackluster improvement shows that when it comes to aligning their policies and practices with human rights–based standards and their obligations under the UN Guiding Principles, companies are content to conduct business as usual when the state of the world demands anything but. If there’s one recommendation we have for every company we rank, it is: accelerate your efforts to develop and implement rights-respecting policies and practices across your operations.  

This marks the sixth edition of our rankings, formerly known as the RDR Corporate Accountability Index, and the first time we have looked at digital platforms separately from telecommunications companies. We will release our Telco Giants Ranking this fall. During our research process, we ask about more than 300 aspects of company and service policies and practices, generating hundreds of thousands of data points. It’s an immense trove of data that we mine for gems of insight to help us and others identify focal points for new corporate accountability research and advocacy. Still, it can be hard to filter without a little guidance. Below, we offer help, noting what’s changed, what hasn’t, and the trends that seem worth highlighting. New this year, we call out scores on specific services, including e-commerce, virtual assistants, and Microsoft’s LinkedIn, a newcomer to the ranking.

We conclude with a look at some pressing problem areas and our strategies for addressing them, including the absence of meaningful human rights due diligence, the need to rein in surveillance-based advertising, and Big Tech’s long-standing neglect of the majority of its users, who live outside the West. We also present some causes for hope, including the potential influence of ESG investing on companies’ policies and practices and the success of recent shareholder actions.

We do not expect every company we rank to earn a perfect score, though that would be nice. What we do expect, however, is notable and steady improvement year over year. Companies can no longer feign ignorance of the negative impacts that their technologies can have on our rights and our democracies. Whether during elections, pandemics, uprisings, or even war, they now have extensive data and experience from which to draw, and they must apply both to craft policies and practices that respect and promote the rights of all. Our key findings for this year’s Big Tech Scorecard aim to help focus their efforts where we see the most potential for harm and our best opportunities to advance corporate accountability in the tech sector.

Company scores: What’s changed since last year? What hasn’t?

At first glance, this year’s ranking looks eerily like the one before. Twitter again took the top spot, for its detailed content policies and public data about moderation of user-generated content. Like most companies, however, it failed to report data about its advertising moderation (see the box for more). Amazon, despite a notable score increase, remained dead last, alongside Chinese behemoth Tencent. No company moved up or down more than one place in this year’s ranking, and as we’ve said so many times, none of them earned a passing grade. 

Google had the fewest improvements, and for the second year in a row, it was the only company that saw its overall score decline. It owes its drop to outdated policies on notifying search service users of content restrictions and encryption for Gmail and Google Drive. Google also stood out this year as the only U.S. company that did not engage with us during the research process, joining the three Chinese companies.

There were also some bright spots. For the third year in a row, digital platforms headquartered outside the U.S. have led year-over-year changes. Chinese companies Baidu and Tencent gained nearly 3 points this year. Russian search giant Yandex had the highest score change (7.6 points), thanks to policy improvements in all three categories: governance, freedom of expression, and privacy. It began publishing transparency reports that offer some insight into how it handles government demands to access user data, and for the first time since RDR started ranking the company in 2017, it disclosed a policy on handling data breaches. Russia’s war on Ukraine may erode these advances, as Russian companies contend with not only international sanctions but also pressure from their own regime to hew to the party line, by censoring content, suspending dissident accounts, and turning over personal data.

Eight companies improved their scores on governance and management oversight, as a result of instituting committees or other upper-management mechanisms to oversee the effects of company practices on freedom of expression and privacy. The highest score change on average came from improvements in disclosures on security practices, including limiting employee access to data and both internal and third-party security auditing. The largest decline on a single indicator came in the freedom of expression category. The average score on our standard that requires companies to notify users of content and account restrictions dropped four points. 

Let us not praise them too quickly: Twitter is still at the top of a failing class.

Twitter took the top spot for the second year in a row, earning 56 out of a possible 100 points on our scale, a three-point improvement over its score in 2020. It owes its rank to the strongest showing by far in our freedom of expression category, which focuses on the kinds of actions companies take to moderate and curate content, suspend and remove accounts, and respond to government and other third-party demands. It also reported more data about actions taken to enforce its own platform rules for user-generated content than any of its peers—an area where platforms are notoriously opaque. Notably, Twitter provided some information about how it deploys targeted advertising and algorithmic systems to curate, rank, and recommend content.

While it did outperform its Silicon Valley compatriots, Twitter still failed to earn a passing grade. It does not make a commitment to respecting human rights in its development and use of algorithms, and it provides little evidence that it conducts human rights due diligence to evaluate whether rights are at risk from government regulations where it operates. It discloses nothing about conducting due diligence on its use of targeted advertising or its own policy enforcement. Though it offers some information about its ad content policies and how it deploys targeted advertising, it reports no data about how it enforces those policies, making it difficult to hold it accountable.

Note: As we prepared to release the Big Tech Scorecard, Twitter accepted a $44 billion bid that will give control of the company to the world’s richest person, Elon Musk. What effect the change in ownership will have on the company remains to be seen. Musk, who is also chief executive of Tesla and SpaceX, has proposed changes to the platform that include less content moderation, opening up algorithms, eliminating bots, and authenticating users. We will be following Twitter’s evolution closely and will report on changes affecting governance, freedom of expression, and privacy in the next Big Tech Scorecard.

U.S. companies still dominate the top half of our ranking, with Korea’s Kakao tying Apple for 6th place.

The order of the top seven companies in our ranking did not change since last year. All companies except Google, which declined slightly, made at least small net improvements to their policies affecting privacy and freedom of expression. Yahoo—formerly Verizon Media, and now the only company we rank that is not publicly traded since its acquisition by private equity firm Apollo Global Management—gained almost three points, thanks to improved security and data breach policies (see below for more on data breaches). 

Microsoft also disclosed more about content governance, releasing data for the first time on content it restricted based on its own rules. Its Bing search engine disclosed more data about how it moderates advertising content than any other service we ranked. But Microsoft’s score remained almost stagnant because its email service, Outlook, stopped receiving credit for encrypting user communication with unique keys: the document disclosing this practice was too outdated to qualify as a current source. The addition of LinkedIn, with its weak policies to safeguard freedom of expression, also put a damper on Microsoft’s total score.

As mentioned, Google, number four this year, was notable as the only company posting an overall decline, the result of fewer disclosures in both privacy and freedom of expression. It removed a commitment to notify users when they search for restricted content. The search giant also distinguished itself again this year as the only U.S. company that did not engage with RDR in our company feedback process. The company does, however, seek to engage with lawmakers in Brussels, fielding more lobbyists to meet with European Commission officials than any other company. It has used tactics so questionable that its CEO Sundar Pichai has apologized for them.

Though Meta (formerly Facebook) released a new human rights policy, it failed to commit fully to upholding international human rights in its development and use of algorithmic systems. The company also failed to make public how its algorithms moderate advertisements.

Meanwhile, Apple, historically stronger in privacy than freedom of expression in our ranking, expanded its reporting on content moderation and enforcement of its App Store rules.

Tying with Apple, Kakao is the only non-U.S. company in the top half of the Big Tech Scorecard. Following criticism over a rogue chatbot on the KakaoTalk messenger, it launched a board-level committee to oversee issues including privacy and freedom of expression. 

The Chinese and Russian companies we rank, along with Samsung and Amazon, round out the bottom half of the Scorecard—with some posting significant score improvements.

The Chinese companies were among the least transparent platforms, but still showed improvement, in part as a result of Beijing’s sweeping crackdown on the once freewheeling sector. To respond to the rapidly changing regulatory environment, both Baidu and Tencent provided more information about their governance processes, which led to notable score improvements in the governance category. In complying with China’s new Personal Information Protection Law (PIPL), Tencent, Baidu, and Alibaba provided users with the ability to opt out of the algorithmic recommendation systems of their major services. They made no other significant gains through legal compliance, because RDR’s human rights-based indicators set higher standards than those of the PIPL.

As in previous years, the Chinese companies kept silent about how they handle government requests and published only superficial data about content removed and accounts restricted. E-commerce platform Alibaba shared the least about its governance, and Baidu disclosed the least about its policies and practices affecting freedom of expression. None of the three companies offered much about whether they conduct human rights due diligence.

Operating in an increasingly challenging political and regulatory landscape, Russian companies Yandex and VK continued to lack transparency on key issues affecting the rights of their users. Though they ranked eighth and ninth, respectively, Yandex outperformed VK in all categories, thanks to its stronger human rights commitment and internal mechanisms for implementing its commitments to freedom of expression and information and to privacy. It was also the most improved this year among all platforms. Both companies performed poorly on human rights due diligence. They each disclosed critical information on their security-related standards but shared very little about how they handle user information.

Smartphone giant Samsung, whose Android mobile ecosystem we evaluate, continued to fall behind its Korean peer Kakao (and most other platforms). Both companies made commitments to respect users’ freedom of expression and privacy, but scored poorly on human rights due diligence. Notably, Samsung earns no credit on more indicators than any other company, and ranks last in our privacy category. It discloses nothing about how it handles third-party requests and publishes no data on such requests.

Kakao notably outperformed Samsung on freedom of expression, thanks to its transparency reporting on its rules enforcement, government demands, and private requests to restrict content and accounts. In the privacy category, both had strong disclosures about their data collection and sharing practices, but disclosed nothing on data inference. Kakao, however, was far more transparent about its handling of government demands for user information.

Despite a respectable four-point score improvement, Amazon once again came in last, tying with Tencent. The e-commerce company bolstered its management oversight of data privacy, and disclosed that it conducts security audits for its Alexa voice assistant, but still fell far short of our standards in all three categories. Like its Chinese counterparts, it published no information about government demands it received to restrict content and accounts. 

Outside the U.S.: How Jurisdiction Affects Disclosures

Human rights are universal, so we evaluate all companies in our ranking without regard for the legal jurisdiction in which they operate. We do, however, conduct jurisdictional analysis to identify what factors may limit or prevent companies from meeting certain standards we set—but we do not reflect our analysis in a company’s score. 

Companies are stonewalling their users when it comes to how they develop and deploy algorithmic systems and infer data.  

In our last ranking, we debuted new standards for disclosures about the development and deployment of both algorithmic and targeted-advertising systems. No company scored well on these standards. In fact, their inclusion in the ranking brought most companies’ scores down. Our current ranking, then, offers the first chance for us to look at whether companies have made any progress on these indicators. The answer is a resounding no. 

To assess companies’ transparency in these areas, we ask whether they commit to human rights and to conducting human rights due diligence in the development of these systems and whether they make policies on algorithmic systems and targeted advertising available. Of all companies, only Microsoft earned any credit for providing access to algorithmic system development policies, the result of LinkedIn (new to our ranking this year) providing vague explanations of how it uses user data to develop machine learning models and how the company addresses bias in large-scale artificial intelligence (AI) applications.  

Companies did somewhat better in disclosing information about how they use algorithms to curate, recommend, and rank content, but in most cases they stop short of saying what kind of controls users have over them. Similarly, while some companies provide some information about how and what they infer about users from data they collect, no company discloses that it limits the data that can be inferred to what is necessary to provide the service offered.

Regardless of jurisdiction, companies earn the most credit on our security indicators, but policies on data breaches can be hard to find.

For 20 out of the 43 services we evaluated, we could not find any description of the company’s processes for responding to a data breach. In April 2021, the sensitive personal data of more than 500 million Facebook users, such as email addresses and location information, was leaked. Yet the only information the company discloses related to data breaches pertains to its response to the Cambridge Analytica scandal. It says nothing about whether policies and practices are in place that can systematically address a data breach when it occurs.

Meta was not the only company that failed its users on this issue. Microsoft lags, too. And both top-ranked Twitter and bottom-ranked Amazon, along with Google, VK, and Samsung, failed to provide any explanation of if and how they notify authorities or users about data breaches. Notably, two Chinese companies we rank, Alibaba and Baidu, earned the highest scores. 

Spotlighting Services: E-Commerce, Virtual Assistants, and LinkedIn 

Amazon’s rock-bottom score is an outlier among U.S. companies. To catch up, it must make transparency a priority, particularly around enforcement of its own rules. 

In 2020 we added e-commerce giants Amazon and Alibaba to our ranking. Despite being headquartered in very different legal and political environments, both companies placed at or near the bottom of our ranking. And not much has changed this year, leading us to ask why the e-commerce companies are scoring so poorly. Does it have to do with the nature of the services we’re evaluating—e-commerce and virtual assistants—or something else? 

We reviewed the companies’ performance across our categories and noted that both ranked at (Alibaba) or near (Amazon) the bottom on governance, along with another Chinese company, Tencent. We then looked more closely at our evaluations of each company’s e-commerce services, Amazon.com and Taobao.com. Amazon edges out Taobao on governance but lags behind the Chinese service in our freedom of expression and privacy categories. We noted a similar pattern in reviewing the scores for virtual assistants: Amazon’s Alexa outpaces Alibaba’s AliGenie but trails Apple’s Siri and Google Assistant. In every case, we noted that while the scores of Alibaba and its services are aligned with those of its peers in China and other jurisdictions outside the U.S., Amazon is an outlier when compared to other U.S. companies, falling far behind them in all our categories.

If e-commerce isn’t the cause of the lower scores, the question is, why doesn’t Amazon do better?

“Whether e-commerce, virtual assistants, or social media, the targeted advertising business model is still at the root of some of the internet’s worst human rights harms.”

Amazon.com scores lower on most of our standards than its U.S. peers, but one area stood out: censorship and content governance. The service did not share any information about how it responds to government demands or private requests to censor content or restrict accounts. It earned the lowest score (20%) among all platforms we rank on our standard that asks companies to explain their processes for enforcing their own content rules.

Only Amazon can explain why it has neglected these policy areas. We do know it is not a matter of resources. In February this year, Amazon reported record profits and disclosed its advertising revenue—$31.2 billion in 2021—for the first time. Advertising is its third-largest source of income, after cloud computing and e-commerce. With its massive store of first-party data, the company is moving into the ad business, competing with ad-tech giants like Meta and Google, which outscore Amazon on transparency around ad content and targeting policies, too. Whether e-commerce, virtual assistants, or social media, the targeted advertising business model is still at the root of some of the internet’s worst human rights harms.

Apple’s and Google’s virtual assistants outperform Amazon’s and Alibaba’s in our ranking but they do not disclose how their algorithms work.     

Last year was also the first time we evaluated virtual assistants Alexa (Amazon) and AliGenie (Alibaba), which we then called “personal digital assistants.” This year, we added Apple’s Siri and Google Assistant to the ranking, and the results mirrored our overall findings for their parent companies. Siri and Google Assistant were significantly more transparent than Alexa or AliGenie, even after the latter pair’s improved performances this year. AliGenie disclosed more about how it uses algorithmic systems to moderate content, and Alexa provided new information about its use of user data to train its algorithms. Only AliGenie disclosed anything about how it uses its vast troves of information about users’ interests, hobbies, and browsing habits to algorithmically recommend content and products.

Virtual assistants are a growing concern for privacy and for freedom of expression and information, all the more so because companies disclose so little about how they work. Most virtual assistants need to listen constantly for a “wake word,” meaning they sometimes record things they should not. In a U.S. class action lawsuit, plaintiffs accused Amazon’s Alexa of accidentally recording their conversations without being prompted, and Apple faced a similar suit regarding Siri. We know very little about what companies do with that data, though we can safely assume that the end goal is financial profit. And when virtual assistants are connected to “smart homes,” their opaque AI risks providing a vulnerable entry point for hackers. 

The voice interface also changes how users interact with information in subtle ways, with potentially far-reaching consequences for users’ rights. Virtual assistants return queries with a single answer, which users tend to interpret as empirical truth. In contrast, text-based search engines provide multiple results, so users can easily see that their question has many possible answers. This single answer is concerning from multiple perspectives, including competition and safety, particularly for virtual assistant systems that offer third-party applications.

LinkedIn offered some explanation of how its algorithms work but failed to disclose much information on content removals and third-party requests. 

This year, we evaluated Microsoft’s LinkedIn platform for the first time. Because it connects employers to potential employees, the decisions its algorithms make when organizing content can directly impact users’ livelihoods. It was one of the only services to offer some information about how it processes user data to develop machine learning models and how the company addresses AI bias. 

It was the latest U.S.-based social media platform to pull out of the Chinese market, after the government forced it to censor activists and journalists in 2021. Though LinkedIn’s policies were more transparent and rights-respecting than the Chinese and Russian social media platforms we evaluated, it lagged behind U.S. peers Facebook and Twitter, as well as most of Microsoft’s other services. The platform, which has nearly a billion users, was particularly weak on freedom of expression, failing to fully explain the circumstances under which it removes content in line with internal rules or external requests, or the volume of these restrictions. 

Charting the Future of Big Tech Accountability

Whereas governments and their institutions have obligations to their citizens to enact mechanisms that ensure transparency in decision-making, companies are notoriously opaque. They argue that disclosing too much about their policies, revenues, and technologies can compromise their competitive position and therefore jeopardize their shareholders’ returns.

Pushing a company to change first requires understanding how it operates. This is what RDR’s research methodology is designed to do: show whether a company incorporates consideration of fundamental human rights into its policies and practices. With what we learn, we can then engage a range of actors to hold companies accountable both for what they say they do and for what we think they ought to do. Finally, we must connect our work to their driving force: their bottom lines.

Over the last year, a growing group of civil society actors, including shareholders and institutional investors, whistleblowers, policymakers, and former employees, have scored big wins against Big Tech. Below we note three areas where momentum is building and with it, hope for change.

Companies’ transparency on human rights due diligence, a key to avoiding human rights harms, is exceptionally poor.  

Only Microsoft earns a perfect score on any of our human rights due diligence indicators—for its transparency on assessing the impact of government policies on freedom of expression and privacy. Yet it discloses much less about any due diligence it conducts on its own operations. This disparity is part of a broader trend: several companies—Apple, Tencent, Twitter, Yandex—published snippets of new information about how they assess the impacts of government policies on freedom of expression and privacy. But across the board, companies showed far less interest in examining the risks posed by their own products and business operations.

Take targeted advertising. Despite the clear human rights harms that stem from targeting systems, not a single company has announced a comprehensive human rights impact assessment of the mechanisms it uses to tailor ads to its users. In fact, none of the 14 global Big Tech platforms has improved in this area at all over the last year. Meta’s civil rights audit remains the only assessment that comes close, but its impact was diluted by its limited scope: it covered only the potential discriminatory effects of targeting and centered on the U.S., largely disregarding harms caused elsewhere.

The same opacity permeates what companies say about the human rights due diligence that goes into developing and deploying algorithms. While regulatory efforts are afoot in the EU and China to rein in unaccountable algorithms, which are also a focus of growing scrutiny by shareholders, neither has yet compelled companies to expand their transparency on these fronts.

Still reckoning with the business model: to fix the internet, we must first fix online ads. 

For the second year in a row, none of the 14 companies we rank earned more than 50% of the possible points on our targeted advertising indicators. Companies typically have rules for ad content and for ad targeting, but independent research suggests that they sometimes do a bad job at enforcing these rules. If companies were more transparent about how they enforce their rules (and what technologies lie behind their processes), we would know more. But we know from our own research that among the industry’s leaders, there is virtually no transparency reporting about ad policy enforcement. (TikTok, which we don’t rank, discloses the raw number of rejected ads.)

Apple, a company whose public rhetoric vaunts its commitment to privacy, came in last, with 19.82%. This is hard to reconcile with its very public war on third-party tracking—and the surveillance advertising it enables—through its App Tracking Transparency program, which Meta says will cost the social media giant $10 billion in yearly ad revenue, even as Apple’s own ad revenue skyrockets. Our data underscores the fact that Apple needs to come clean about its own ad business.

Companies fared even worse on algorithmic transparency, where the highest score was Yahoo’s 22.45%. Algorithmic systems are the beating heart of the Big Tech business model: without automation, platforms cannot hope to achieve the global scale and market dominance that is key to their astronomic profits. Civil society, investors, policymakers, and the public all clamor for basic transparency about these systems that impact every facet of our lives, to no avail.

ESG investing and shareholder action are tying rights to risks—and companies’ bottom lines.

The human rights impacts of technology have become glaring enough to shake up the financial markets that give Big Tech companies life. Shareholders have emerged as a powerful voice in the push for corporate accountability in the tech sector—and often as powerful allies of the human rights community. But the chips are often still stacked against them in their efforts to press for change.

So far, 2022 has been a banner year for shareholder activism. As of February, members of the Interfaith Center on Corporate Responsibility, a major coalition of shareholders and allied organizations seeking to promote responsible corporate behavior, had filed more than 400 shareholder proposals for the 2022 proxy season. Retail and institutional investors with stock in the targeted companies will vote on many of these proposals at each company’s annual meeting.

A great deal of this momentum has been fueled by the meteoric rise of ESG (environmental, social, and governance) investing. ESG investors seek to determine how well companies are fulfilling their responsibilities as stewards of social and environmental good. But without a strong foundation, ESG can easily turn into ethics-washing. That is why investors are coalescing around human rights standards as the ideal benchmark with which to assess the risks to society that companies generate or enable. Among shareholders, RDR’s standards are becoming widely shared criteria for evaluating tech companies’ transparency and for pinpointing cases where their declarations deviate from their deeds.

Case in point: last year, investors representing nearly $6 trillion in assets signed an Investor Statement on Corporate Accountability for Digital Rights challenging companies ranked by RDR to make specific improvements identified by our team. This was the capstone to our long-standing collaboration with the Investor Alliance for Human Rights, which rallied investors to the cause. Since then, new signatories have stepped up and the value of their collective holdings has increased to $9 trillion.

This year we also broke new ground by directly supporting shareholder resolutions at Alphabet and Meta challenging the two tech titans to assess the human rights impacts of their targeted advertising systems. The resolution aimed at Meta will be one of a dozen that the company’s shareholders will vote on at the end of May. Investors have continued to cite our standards in resolutions targeting Apple and Twitter this proxy season; dozens more align with our standards. In particular, resolutions demanding that companies conduct human rights impact assessments and improve transparency reporting have catapulted to the mainstream, bending juggernauts such as Microsoft to make new commitments and winning the support of a critical mass of shareholders at Apple.

But the barriers to effective shareholder action on human rights remain enormous. Chief among them are multi-class stock structures. Employed by tech giants like Alphabet, Meta, and Snap, these structures concentrate power in the hands of a small clique of founders and insiders by granting them inflated voting power relative to ordinary shareholders. Ultimately, this gives company leadership the ability to deflect calls for accountability, even if those calls enjoy overwhelming support. Multi-class structures are hallmarks of poor corporate governance, entrenching unaccountable leadership, disenfranchising shareholders, and shifting the risks of a company’s dereliction onto the public. For all these reasons, RDR advocates for dismantling multi-class structures and reversing a set of rules that further stifle shareholders’ ability to hold companies accountable in the U.S. 

Broad neglect of human rights due diligence by U.S. tech giants continues to exacerbate harms in the majority of the world, which, despite comprising the largest proportion of users by far, receives the fewest resources for trust and safety.

When Russia invaded Ukraine in February, tech companies were swift to respond. Among other measures, Meta devoted extra staffing to content review, YouTube blocked “Russian state-funded media,” and Apple disabled traffic and live incidents features for its Maps application in Ukraine. This stands in striking contrast with companies’ inaction and slow responses to crises in other parts of the world, such as incitement to violence and hate speech in Ethiopia’s ongoing conflict.

This inconsistency is exacerbated by two factors. First, human rights due diligence on operations and implementation of policies in the Global South is lacking and conducted mainly after the fact, as in the case of Facebook’s long-overdue decision to conduct a human rights–based assessment of its impacts in Palestine after the period of escalated violence in May and June of 2021. Second, resources for content moderation are allocated unequally outside the U.S. and Western Europe. For instance, according to the Facebook Files, Meta allocates 87% of its budget for combating misinformation to issues and users based in the U.S.

With the exception of Amazon, all U.S. tech companies ranked by RDR had relatively strong commitments to human rights, disclosed governance practices and management oversight of these issues, and had employee training and whistleblower programs in place to implement their human rights commitments. Yet, without strong human rights due diligence and a more equitable distribution of content moderation resources across the world, users in the Global South will continue to bear the brunt of inconsistent implementation of tech company policies.

Companies are engaging more with civil society and investors but ignoring the need for engagement on algorithms and ads, and they are neglecting users’ rights to remedy.

Faced with mounting pressure from policymakers seeking to regulate them, shareholders concerned about the material risks stemming from their governance and operations, advertisers worried about brand safety, and workers fed up with feeling complicit in their employers’ harms, companies are gradually stepping up their engagement with stakeholders, including RDR.

Each year, as a part of our research methodology, we offer companies an opportunity to review our preliminary results and make arguments—supported by evidence that meets our criteria—that they should earn credit where we saw none. Platforms are increasingly providing constructive feedback on these results, recognizing that it can lead to an improvement in their scores.

This year, every platform we rank except the Chinese ones and, perplexingly, Google offered such feedback. While Alibaba, Baidu, and Tencent have fewer incentives to engage with the human rights community, Google’s lack of input is an anomaly among U.S. platforms for which we have no explanation. It is also deeply concerning, given the power the company has to shape our information environment through its dominant search and advertising services.

Our standards set a bar for companies to regularly discuss freedom of expression and privacy with a range of interlocutors. We ask whether a company participates in a multistakeholder initiative with a credible assessment mechanism, like the Global Network Initiative (GNI), or whether it discloses any other kind of systematic engagement with non-industry, non-governmental stakeholders. This year, no company earned more than half credit, and six companies, all based outside the U.S., earned nothing at all. 

Users have a right to contest decisions made about their content and accounts. Every digital platform should maintain open channels through which users can voice their concerns and seek remedy when a platform causes harm, without special treatment for VIPs. Yet our data shows that companies are still failing to prioritize remedy mechanisms. 

Although this year did not feature the kind of dramatic collapse that disabled Meta’s content moderation appeals system, companies’ remedy policies largely stagnated. Since our last round of evaluations, when we broke out an indicator on content moderation appeals, we have noted virtually no improvements in this area. Companies are still tight-lipped about whether they notify users whose content is restricted, how long it takes to address an appeal, and what role human reviewers and algorithms play in the process.

This lack of attention to upgrading remedy policies is especially disappointing at a time when human rights actors are increasingly coalescing around the importance of these disclosures. It is galling enough in times of peace, but armed conflict has rendered millions of users voiceless, from Afghanistan to Myanmar to Ukraine. Under these conditions, reliable grievance channels are of paramount importance.

 
