By Zak Rogoff & Nathalie Maréchal

Identifying content moderation solutions that protect users’ rights to free expression and privacy is one of the toughest challenges we face in the digital era. Around the world, digital platforms are getting due scrutiny of their content moderation practices from lawmakers and civil society alike, but these actors often disagree on how companies might do better. They also routinely fail to consult the people and communities most affected by companies’ failures to moderate content fairly.

But there is agreement in some areas, such as the need for more transparency. Indeed, there is a growing global consensus that companies should be far more transparent and accountable than they are at present about how they create and enforce content rules.

Today, we’re excited to join colleagues from around the world for the launch of the second edition of the Santa Clara Principles on Transparency and Accountability in Content Moderation, a civil society initiative to provide clear, human rights-based transparency guidelines for digital platforms.

Launched in 2018, the original Santa Clara Principles laid out essential transparency practices that companies could adopt in order to enable stronger accountability around their content moderation practices. The second edition of the principles builds on this work by acknowledging the particular challenges that companies must confront around the world, and by explicitly extending the principles to apply to paid online content, including targeted advertising.

To do this work, Ranking Digital Rights joined more than a dozen civil society organizations in seeking feedback on the original set of principles from stakeholders around the world, to ensure the revised edition would reflect the needs of the diverse and growing body of people who use digital platforms. Our goal was to share our expertise in human rights benchmarking and to encourage the coalition to publish guidelines that align with our own standards on governance and freedom of expression, which we have used to evaluate the world’s most powerful tech and telecom companies since 2015.

In particular, we made the case that when it comes to targeted advertising, companies should be held to levels of scrutiny and transparency equal to or higher than those applied to the moderation of user-generated content. Beyond protecting the freedom of expression of advertisers themselves, this will help digital platforms take steps to prevent advertising that discriminates, misleads, harasses, or otherwise interferes with users’ freedom of expression and information rights.

Independent research and reporting have shown that platforms do not adequately enforce national advertising laws, and that they sometimes even violate their own consumer protection rules. Transparency reporting is a necessary first step toward accountability in this area. Since our 2020 methodology revision, RDR’s indicators have advanced clear standards for advertising transparency that have informed this and other important policy advocacy efforts.

Read the revised Santa Clara Principles.


Edzell Castle, Angus, Scotland. Photo by John Oldenbuck via Wikimedia Commons. CC BY-SA 3.0

This is the RADAR, Ranking Digital Rights’ newsletter. This special edition was sent on November 16, 2021. Subscribe here to get The RADAR by email.

This season at RDR, we’ve done some deep thinking on one of the fastest-changing aspects of the industry we study: advertising models.

We’ve been waiting and watching to see how things change for Apple and the ad ecosystem around it, following the April 2021 rollout of its App Tracking Transparency (ATT) program, which requires app developers to get user consent before tracking them across the web. Was this move really driven by Apple’s commitment to privacy? Or did it have more to do with the company’s desire to edge out its biggest competitors in the digital ad space?

As of last month, the verdict is in: The majority of iOS users are not opting into third-party tracking — and Apple’s ad business has more than tripled its market share since April 2021, according to the Financial Times. FT also reported that ad revenues for major third-party app companies like Facebook and Snapchat have dropped by as much as 13 percent in what appears to be a result of the change.

Then there’s Google. Privacy nerds have heard about Google’s forthcoming “FLoC” system, which will move Chrome users away from third-party cookies and towards a “cohort-based” tracking model that the company says will be better for people’s privacy. But some are skeptical as to how much this program will really protect people’s privacy and security. In late October, we found more reason to worry when a federal judge in New York unsealed the amended 2020 antitrust suit filed against Google by 16 state attorneys general plus the AG of Puerto Rico, who together allege that this initiative is almost entirely profit-driven.

The suit lays out a litany of accusations that the company has engineered a quasi-monopoly over digital advertising markets, colluded with Facebook to control the market, and engaged in a host of related deceptive practices. On the FLoC front, it cites internal company documents indicating that Google’s so-called “Privacy Sandbox” (the origin of the FLoC system) was originally dubbed “Project NERA” and that it was intended to “successfully mimic a walled garden” in what a staffer described as an effort to “protect our margins.” RDR’s Aliya Bhatia and Ellery Biddle wrote about it this week in Tech Policy Press.

Although both Google and Apple say that they’re making these changes in order to better protect user privacy, the profit motives are clear, present, and enormous. While the changes may whittle away at the troves of data that so many digital companies have on us, they also will help to consolidate our digital dossiers in the hands of a few uniquely powerful platforms, and reduce or even eliminate many of the smaller players in the ecosystem. If we’re really moving to a paradigm where first-party tracking dominates the sector, we have to ask: How might this shift affect people’s rights and matters of public interest? RDR’s Ellery Biddle and Veszna Wessenauer dug into this in our latest blog post.
Read the post here →

RDR MEDIA HITS
Washington Post: The Facebook Files have put Meta’s controversial news feed ranking system back in the spotlight, causing some lawmakers to suggest that people should be able to use platforms like Facebook without having to submit to their recommendation algorithms. Speaking about the issue with the Washington Post, RDR’s Nathalie Maréchal said, “I think users have the right to expect social media experiences free of recommendation algorithms.” She also noted that while Meta’s research on chronological feeds may be compelling, it should be taken with a grain of salt: “…as talented as industry researchers are, we can’t trust executives to make decisions in the public interest based on that [internal] research,” she said. Read via Washington Post.

CBS News: When Facebook (now Meta) announced plans to end its use of some facial recognition systems, many privacy advocates celebrated. But RDR’s Nathalie Maréchal urged caution about the purported change, noting that the announcement came amid policymakers criticizing the company for putting profit ahead of people’s rights. The company is “trying to sidestep the real and extremely important questions about its governance…and [its] transparency record,” she said to CBS News. Lo and behold, Meta announced last week that it will continue collecting and using biometric data in the metaverse. Read via CBS News.

EVENTS
The Internet Governance Forum | Best Practices in Content Moderation and Human Rights
December 8 at 11:30 AM ET | Register here
RDR’s Veszna Wessenauer will participate in a session at IGF on the relationship between digital policy and the established international frameworks for civil and political rights as set out in the UDHR and ICCPR.


Edzell Castle, Angus, Scotland. Photo by John Oldenbuck via Wikimedia Commons. CC BY-SA 3.0

By Veszna Wessenauer and Ellery Roberts Biddle

When Apple announced its plans to tighten restrictions on third-party tracking by app developers, privacy advocates—including us—were intrigued. The company seemed to be charting a new course for digital advertising that would give users much more power to decide whether or not advertisers could track and target them across the web. But we also wondered: What was in it for Apple?

Now we know. The company’s advertising business has more than tripled its market share since it rolled out the App Tracking Transparency (ATT) program in April 2021, which requires app developers to get user consent before tracking them across the web.

Apple has become so powerful that it has changed the rules of the game to its own benefit, and it is now effectively winning. The Financial Times reported in October that Apple’s ads now drive 58 percent of all downloads in the App Store, and more recently reported that ad revenues for major third-party app companies like Facebook and Snapchat have dropped by as much as 13 percent as a result.

It is clear that Apple’s move, alongside Google’s forthcoming transition to tracking people in “cohorts” rather than at the individual level, could shake up the uniquely opaque (but almost certainly icky) underworld of the internet that is ad tech. Every second we spend online, advertisers hawking everything from prescription drugs to political candidates compete for our attention. Internet companies use the ever-growing troves of information that they have about us, much of it gathered up with the use of third-party cookies, to sell ad slots to the highest bidder. Today there is a vast ecosystem of companies that carry out this particular function of using our data to enable targeted advertising. But now two of the industry’s biggest companies are shifting away from this model, albeit in different ways.
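To make those mechanics concrete, here is a toy model of the kind of real-time auction described above, assuming a simple second-price rule (long the industry norm, though major exchanges have since moved to first-price auctions). The bidder names and bidding logic are invented for illustration:

```python
# A toy real-time ad auction. Each bidder prices the impression using
# what it knows about the user; the highest bidder wins the slot and,
# under the second-price rule assumed here, pays the runner-up's bid.
def run_auction(user_profile: dict, bidders: dict) -> tuple[str, float]:
    bids = {name: bid_fn(user_profile) for name, bid_fn in bidders.items()}
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # winner pays the second-highest bid
    return winner, price

# Hypothetical bidders reacting to a (very thin) user profile.
profile = {"interests": ["running", "travel"], "recent_search": "trail shoes"}
bidders = {
    "shoe_brand": lambda p: 2.50 if "running" in p["interests"] else 0.10,
    "airline":    lambda p: 1.80 if "travel" in p["interests"] else 0.05,
    "generic":    lambda p: 0.25,  # bids the same for everyone
}
print(run_auction(profile, bidders))  # -> ('shoe_brand', 1.8)
```

The point of the sketch is the input: the richer the profile, the more the impression fetches, which is why those ever-growing troves of tracking data matter so much.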

Although both companies say that they’re making these changes in order to better protect user privacy, the profit motives are clear, present, and enormous. While the changes may whittle away at the troves of data that so many digital companies have on us, they also will help to consolidate our digital dossiers in the hands of a few uniquely powerful platforms, and reduce or even eliminate many of the smaller players in the ecosystem.

If we’re really moving to a paradigm where first-party tracking dominates the sector, we have to ask: How might this shift affect people’s rights and matters of public interest? We know a lot about how these systems will affect people’s privacy, but what about other fundamental rights, like the right to information or non-discrimination?

Third-party tracking is now tied to some of the most insidious and harmful targeting practices around. With the help of a massive amount of third-party data—collected from third-party websites or apps through technical means such as cookies, plug-ins, or widgets—advertising can be hyper-personalized and tailored to consumer segments or even individuals. Political campaigns can target us to the point that they can swing an election, or tell us to go vote on the wrong day. Conspiracy theorists can capture vulnerable eyeballs and convince people that COVID-19 is a hoax. But it’s not entirely clear that the move away from third-party tracking will change these dynamics.
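The collection mechanism itself is mundane. Below is a minimal, hypothetical sketch of a third-party “tracking pixel” server (the endpoint, cookie name, and in-memory storage are all invented for the example): because many publisher sites embed a resource from the same tracker domain, a single cookie ties a person’s visits together across all of them.

```python
# Toy third-party tracker: publishers embed an invisible 1x1 image
# served from this domain, so the same "uid" cookie comes back from
# every site a person visits, letting one party build a cross-site profile.
from http.server import BaseHTTPRequestHandler, HTTPServer
from uuid import uuid4

profiles = {}  # tracking ID -> list of sites where the pixel was seen

class TrackingPixel(BaseHTTPRequestHandler):
    def do_GET(self):
        cookies = self.headers.get("Cookie", "")
        # Reuse the visitor's tracking ID if one exists; otherwise mint one.
        uid = next((c.split("=", 1)[1] for c in cookies.split("; ")
                    if c.startswith("uid=")), None) or str(uuid4())
        # The Referer header reveals which page embedded the pixel.
        profiles.setdefault(uid, []).append(self.headers.get("Referer", "unknown"))
        self.send_response(200)
        # A long-lived cookie ties all of these visits together.
        self.send_header("Set-Cookie",
                         f"uid={uid}; Max-Age=31536000; SameSite=None; Secure")
        self.end_headers()

HTTPServer(("", 8080), TrackingPixel).serve_forever()
```

Blocking third-party cookies breaks exactly this pattern, which is why the industry is scrambling for replacements.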

We can only know how good or bad these moves really are for users’ rights, and for society at large, if we know what’s happening to our data, and if companies give us some ability to decide who gets it and how they can use it. Unfortunately, neither Apple nor Google (nor any of the companies we evaluate) has ever met our standards for these kinds of disclosures.

This season, we’ve been studying this impending shift, assessing the motivations that seem to be driving Apple and Google to make these changes, and comparing companies’ public statements about their plans to their actual policies on things like algorithms and ad targeting. We are using our own standards to inform our understanding of how these changes will affect users’ rights, and what human rights-centric questions we should be asking Google as it rolls out its new “FLoC” system.

Apple is getting creepy

In 2020, Apple’s announcement of the ATT plan triggered loud public criticism from Facebook (now Meta). Most users access Meta’s services via mobile devices, many of which are made by Apple. This makes Apple the gatekeeper for any application available to iPhone or iPad users, Meta included.

A very public tête-à-tête soon ensued, much of which stemmed from an open letter that we at RDR wrote to Apple, pressing the company to roll out these changes on schedule in the name of increasing user control and privacy.

In response to our letter, Apple Global Privacy Lead Jane Horvath wrote that “tracking can be invasive and even creepy.” She singled out Meta, saying that the company had “made clear that their intent is to collect as much data as possible across both first- and third-party products to develop and monetize detailed profiles of their users.”

We stand by our original position, which was rooted in our commitment to user privacy and control. But we don’t want to see these things come at the expense of competition.

With the new system in place and Apple newly dominant in the ad market, we have to ask: What if Apple engages in similarly “creepy” practices by exploiting the boatloads of first-party data it has on its users? It is worth noting that while Apple now requires developers to explicitly capture user consent for tracking (via “opting in”), Apple users are subject to a separate set of rules about how Apple collects and uses their data. If they want to use Apple’s products, they have no choice but to agree. Recent research by the Washington Post and Lockdown also suggests that some iPhone apps are still tracking people via fingerprinting on iOS, even when they’ve opted out.

The public face-off between the companies helped clarify the motivations that may actually have driven Apple’s change. The new rules put the company in an even more powerful position to capture, make inferences about, and monetize our data. If its ad revenues since the change was implemented are any indication, the plan is working.

Apple has published policies acknowledging that it engages in targeted advertising. But there’s a lot missing from the company’s public policies and disclosures about how it treats our data.

  • Apple has published no public documentation explaining whether or how it conducts user data inference, a key ingredient in the monetization of user data.
  • Apple discloses nothing about whether or not it collects user data from third parties via non-technical means.
  • Apple offers no evidence that it conducts human rights impact assessments on any of these activities.

When it comes to FLoC, what should we be asking Google?

Although it won’t debut until 2023, we have some details about Google’s “Federated Learning of Cohorts,” or FLoC, a project of Google’s so-called Privacy Sandbox initiative. The company describes the system as “a new approach to interest-based advertising that both improves privacy and gives publishers a tool they need for viable advertising business models.” What the company doesn’t say is that this new paradigm may actually shut out other advertising approaches altogether.

From what Google has said so far, we know that FLoC will use algorithms to put users into groups that share preferences. The system will track those groups, rather than allowing each of us to be individually tracked across the web. Advertisers will be able to show ads to Chrome users based on these cohorts, which will contain a few thousand people each. The cohorts will be updated weekly, to make sure that the targeting is still relevant and to reduce the possibility of users becoming identifiable at the individual level.
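Under the hood, the version of FLoC that Chrome trialed reportedly derived cohort IDs with SimHash, a locality-sensitive hash that maps similar browsing histories to the same short ID. The sketch below shows only the core idea; the bit width and hashing details are simplifications, and Chrome’s actual implementation also suppressed cohorts that were too small or touched sensitive categories.

```python
import hashlib

def cohort_id(domains: list[str], bits: int = 16) -> int:
    """Collapse a browsing history into a short SimHash cohort ID.

    Similar histories tend to produce the same ID, so each ID names
    a "cohort" of many people rather than a single individual.
    """
    totals = [0] * bits
    for domain in domains:
        digest = hashlib.sha256(domain.encode()).digest()
        for i in range(bits):
            bit = (digest[i // 8] >> (i % 8)) & 1
            totals[i] += 1 if bit else -1
    # Each output bit is a majority vote across the hashed history.
    return sum(1 << i for i, t in enumerate(totals) if t > 0)

# Two users with overlapping histories usually share a cohort:
alice = cohort_id(["news.example", "shoes.example", "recipes.example"])
bob = cohort_id(["news.example", "shoes.example", "travel.example"])
```

Note what the ad-facing output is: not “this user visited these sites,” but “this browser currently belongs to cohort N.” Whether that is meaningfully more private depends entirely on how cohorts are computed and policed, which is exactly the information Google has not published.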

The Electronic Frontier Foundation’s Bennett Cyphers has noted that this weekly update will make FLoC cohorts “less useful as long-term identifiers, but it also [will make] them more potent measures of how users behave over time.” It is also worth noting that the system will make it much easier to effectively use browser fingerprinting techniques, which do enable individual-level targeting.
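Fingerprinting needs no stored identifier at all: it derives one from the combination of attributes a browser reveals, which is often unique to a device. Here is a deliberately short toy sketch; real fingerprinting scripts combine dozens of signals, including canvas and font rendering, and the attribute list here is illustrative only.

```python
import hashlib

def browser_fingerprint(user_agent: str, languages: str, timezone: str,
                        screen: str, fonts: list[str]) -> str:
    """Derive a stable identifier from attributes a browser exposes.

    Nothing is stored on the device, so clearing cookies or declining
    tracking prompts does not change this identifier.
    """
    signals = "|".join([user_agent, languages, timezone, screen,
                        ",".join(sorted(fonts))])
    return hashlib.sha256(signals.encode()).hexdigest()[:16]

print(browser_fingerprint(
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) ...",
    "en-US,en", "Europe/Budapest", "2560x1440",
    ["Georgia", "Helvetica", "Menlo"]))
```

This is why combining a cohort ID with even a few such signals can quietly re-identify individuals.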

Learn more about FLoC with these explainers from EFF and RestorePrivacy.

It is important to understand that Google is not actually moving away from a targeted advertising business model. All we really know at this stage is that FLoC will constitute a move toward a paradigm where fingerprinting technology becomes much more powerful and easier to deploy, and where signature tracking techniques are algorithmically driven. If it’s anything like Google Search, or the company’s other products, we can expect to find very little public information on how these algorithms are built or deployed.

We also expect that it will become even more difficult to audit and hold the company accountable than was the case with cookies, which are easy to test for privacy violations. Google has made big promises about supporting and building a more open web. But from where we’re standing, FLoC looks like a new variation on the walled garden.

In fact, documents that were recently unsealed in a massive antitrust suit filed against Google charge that this is all an effort to shore up power in the online advertising market. The suit cites internal company documents saying that Project NERA, the precursor to the Privacy Sandbox, was meant to “successfully mimic a walled garden across the open web [so] we can protect our margins.” The unsealed documents also suggest that the “Privacy Sandbox” name and branding were rolled out in order to reframe the changes using privacy language, and to deflect public scrutiny. The court filings also provide a lot of support for the idea that Google’s main constituency here is advertisers, not users.

Will this really work? Does Google have enough data about us for this to be effective? In short, yes. Google can afford to shift to a system like FLoC precisely because of its monopoly status in the browser market alongside other key markets. Thanks to its sprawling suite of services—Chrome, Gmail, Google Drive, Google Maps, and, of course, Android—the company has access to incredibly rich and sensitive user data at scale, second to no other company outside China. While Google’s business model relies heavily on advertising, it does not need to rely on third-party data in order to be an effective seller of ad space. With this transition, it could effectively cut out the third-party ad sellers altogether.

It’s also important to consider how this change will affect the broader market. We’re moving from a diverse (if unsavory) array of players in the ad tech underworld to a paradigm that will concentrate profit and power in the hands of a powerful few. Google controls over two-thirds of the global web browser market. Once Chrome starts blocking third-party cookies, most internet users will be browsing without them.

Although the change will probably bring some benefits for users, it is clearly bad news for the many actors in the ad tech ecosystem that rely heavily on third-party data, including the firms that buy and sell it. For firms that cannot collect data on users directly (in the ways that Google, Apple, or Facebook can), the end of third-party cookies will either snuff out their business models or force radical changes to them.

Here are our key questions for Google:

Will users be able to see what groups they belong to and on what grounds under FLoC? Google should make it clear to users what controls they have over their information and preferences under FLoC.

How will Google identify and address human rights risks in its development and implementation of FLoC? Beyond privacy, targeted advertising can pose risks to other rights, like rights of access to information or non-discrimination. If the company identifies problems in these areas, how will it address them?

Will Google stop collecting third-party data on its users through non-technical means when it starts blocking third-party cookies through its browser? Companies may acquire user information from third parties as part of a non-technical, contractual agreement as well. For example, Bloomberg reported in 2018 that Google buys credit card information from Mastercard in order to track which users buy a product that was marketed to them through targeted advertising. Such contractually acquired data can become an integral part of the digital dossier that a company holds on its users and it can form the basis for inferred user information.

Most companies say nothing about whether and how they acquire data through contractual agreements, we found in the 2020 RDR Index.

None of the companies disclosed what user information they collect from third parties through non-technical means.

Data from Indicator P9 in the 2020 RDR Index.

As these companies consolidate power over our data, what should digital rights advocates focus on?

The fact that Google and Apple—both of which have made public commitments to human rights—are positioning themselves as champions of privacy on the strength of these changes raises the question of whether they consider any risks of targeted advertising beyond privacy.

In the 2020 RDR Index, we introduced standards on targeted advertising and algorithmic systems to address harms stemming from companies’ business models. None of the digital platforms we ranked in 2020 assessed the privacy or freedom of expression risks associated with their targeted advertising policies and practices. Facebook was the only company that provided some information on how it assesses discrimination risks associated with its targeted advertising practices, and this was limited in scope.

When we think of some of the long-term societal effects of targeted advertising, like disinformation around elections and matters of public health, these questions must be part of the equation. People need and deserve to have accurate information about how to protect their health in a pandemic. But we know from independent research and reporting that targeted ads have had an adverse impact on people’s ability to access such information. When it comes to elections, jobs, housing, and other fundamental parts of people’s lives, we also know that Big Tech companies have enabled advertising that discriminates on the basis of race, gender, and other protected characteristics. This is equally harmful. In some cases, it is a violation of U.S. law.

Will the move away from third-party cookies mean the end of tracking and targeting? Not likely. User data is still seen as an essential way to generate added value for digital platforms. Companies like Google and Facebook are digital gatekeepers and have their own walled gardens of (first-party) user data that no one else can see. Google claims that with the introduction of FLoC it will no longer be possible to target individuals, but it is unclear whether and how it will process and draw inferences from users’ browsing data to allocate them into cohorts.

None of the companies in the 2020 RDR Index provided clear information on their data inference policies and practices.

Companies disclosed nothing about the selected indicators.

Data from Indicators P7 and P3b in the 2020 RDR Index.

Are any of these changes going to alter company business models to better align with the public interest? In the case of Google, Chrome users will no longer have to contend with the opacity of third-party tracking. Rather than wondering which third parties might have their data, and how they’re using it, they will know that most of their data sits with Google.

But without more transparency from the company, it will be just as impossible to find out how Google uses our data, and how our data might serve advertisers seeking to do things like swing an election or promote anti-vaccine propaganda. The same will be true for Apple. Until both companies are forced to put this information out for public view, we will have about as little knowledge of (or control over) how our information is being used as we do now.

London street art. Photo by Annie Spratt. Free to use under Unsplash license.

This is the RADAR, Ranking Digital Rights’ newsletter. This special edition was sent on October 21, 2021. Subscribe here to get The RADAR by email.

Since the Wall Street Journal’s release of the Facebook Files and the subsequent debut of whistleblower Frances Haugen in the public conversation, we’ve seen a lot of pushback from Facebook. Company executives have claimed that Haugen didn’t have sufficient knowledge about the practices she brought to light, argued that the WSJ series “mischaracterized” Facebook’s approach, and attacked a network of journalists working on a series of follow-up reports drawing on the documents.

The company can obfuscate and deflect as it wishes, but the data Facebook is willing to release—and that which it keeps private—speaks for itself. Companies often wax poetic about the social and commercial benefits that they bring to people and businesses, but when it comes to their concrete effects on people’s lives and rights, policies and practices are what actually count. That is what RDR is here to measure. Although we have a strong focus on company policies, which establish a baseline for what they say they will do, we also ask companies to publish concrete evidence of their practices, with things like transparency reports.

Last week, we “cross-checked” Facebook, comparing company statements and policies with the Haugen revelations, and with our own data and findings since 2015. Again and again, we see that in areas where Facebook is most opaque about its practices, such as targeted advertising and the use of algorithms to enforce ad content policies, the hard evidence laid out by Haugen and other whistleblowers, like Sophie Zhang, paints a troubling picture of how the company treats its users. As Haugen told the U.S. Congress a few weeks ago, profits do take priority over the public interest at Facebook.

Read “Cross-checking the Facebook files” →

If Facebook’s decisions are mainly driven by profit, then we need to follow the money. Facebook’s earnings reports show that at least 98% of the company’s revenue comes from advertising, and we know that ad sales on Facebook are driven by the company’s vast data collection machine. That’s why we’ve joined Fight for the Future’s call on Congress to pass federal privacy legislation. We hope our friends and allies will consider doing the same.

See our 2020 report card for Facebook →

RDR’s 2020 encryption scores for digital platforms. See full results.

State and corporate eyes are still watching us. So let’s encrypt!

Happy Global Encryption Day! At RDR, we push companies to encrypt user communications and private content so that users can control who has access to them. In our 2020 research, we found that some of the world’s biggest companies still have a very long way to go on encryption.

Since 2015, we’ve evaluated companies’ use of encryption by looking for evidence that they encrypt the transmission of user communications by default and using unique keys. We also look for evidence that the company allows users to secure their private content using end-to-end encryption, or full-disk encryption (where applicable), and ask if these things are enabled by default. The chart above shows digital platforms’ scores on our encryption indicator from 2020.
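To illustrate what the end-to-end criterion asks for, here is a minimal sketch using the PyNaCl library (`pip install pynacl`). It is not any particular platform’s implementation; the point is the key arrangement: private keys live only on users’ devices, so a service that merely relays the ciphertext cannot read it.

```python
from nacl.public import PrivateKey, Box

# Each user generates a key pair on their own device; the platform
# only ever sees the public halves.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# The platform can store and relay `ciphertext` but cannot decrypt it;
# only Bob's private key (with Alice's public key) recovers the message.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```

Full-disk encryption protects the same content at rest on the device, and by-default transport encryption with unique keys protects it in transit; our indicator looks for all three.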

We observed a steep decline in encryption standards for the Russian companies that we evaluate, Yandex and Mail.Ru, owing to proposed regulations that would limit encryption’s use. While Mail.Ru (owner of VKontakte) never had especially strong practices in this area, search engine leader Yandex distinguished itself on encryption in years past, outperforming Google, Facebook, and Microsoft as recently as 2019.

Of course, private companies like the ones we rank are only part of the equation. Companies specializing in surveillance software continue to reap huge profits from sales to government agencies, which use these tools not only against legitimate criminal targets but also against activists and journalists who are working to hold their governments to account. Thanks to years of research by groups like The Citizen Lab and Amnesty International, and the more recent revelations around the broad-based use of NSO Group’s Pegasus software, there is more hard technical evidence in the public domain than ever before of how these technologies are used and whom they harm.

This week, we are proud to support a letter to the U.N. Human Rights Council pushing members to mandate independent investigations of the sale, export, transfer, and use of surveillance technology like Pegasus. We also join civil society groups around the world, in a campaign organized by the Internet Society, to call on both governments and the private sector to enhance, strengthen, and promote use of strong encryption to protect people everywhere.

Global investors are calling on tech companies to implement our recommendations

A group of global investors with more than $6 trillion in assets called on the 26 tech and telecom companies we ranked in the last RDR Corporate Accountability Index to commit to some of our high-level recommendations. In concert with our report, the Investor Alliance for Human Rights brought together nearly 80 investor firms to support this effort. The group calls on companies to:

  • implement robust human rights governance;
  • maximize transparency on how policies are implemented;
  • give users meaningful control over their data and data inferred about them;
  • and account for harms that stem from algorithms and targeted advertising.

RDR Media Hits

Tech Policy Press: Will creating third-party recommender systems or “middleware” solve content problems on Facebook? At a recent symposium hosted by Tech Policy Press, featuring Daphne Keller and Francis Fukuyama and moderated by Richard Reisman, RDR Senior Policy and Partnerships Manager Nathalie Maréchal explained why she’s not convinced. Beyond the numerous privacy pitfalls of third-party recommender systems, this solution doesn’t address the core issue at hand: the surveillance capitalism business model. Read the transcript at Tech Policy Press.

MIT Tech Review: RDR Projects Director Ellery Biddle spoke with the Tech Review’s Karen Hao about the viability of Facebook whistleblower Frances Haugen’s proposal to regulate algorithms by creating a carve-out in Section 230 of the Communications Decency Act. In short, she says we’ll need a lot more transparency around algorithms before we can look to solutions like this one. Read via MIT Tech Review.

The Logic: The Government of Canada’s proposed online harms bill is “unworkable,” according to RDR’s Maréchal. She offered key points from RDR’s comments on the bill, in an interview with The Logic, a Canadian publication covering the innovation economy. Read via The Logic (paywalled).

National Journal: Maréchal also spoke with the National Journal to push back on Rep. Pallone’s proposed bill to reform Section 230, saying that the bill “falls into the same trap of all the other well-intentioned 230 bills.” Pointing to the experience of sex workers in the wake of SESTA/FOSTA carve-outs, Maréchal asserted that the carve-outs often lead to companies erring on the side of mass removals of content posted by users, forcing marginalized individuals off the internet. Read via National Journal.

Events

UCLA Institute for Technology, Law & Policy | Power and Accountability in Tech
November 5 at 4:00 PM ET | Register here

RDR Director Jessica Dheere joins UCLA’s week-long conference examining corporate power, multi-stakeholder engagement, and solutions to uphold human rights. Jessica will speak on a panel alongside Nandini Jammi, co-founder of Check My Ads; Lilly Irani, associate professor of Communication and Science Studies at UC San Diego; and Isedua Oribhabor, business and human rights lead at Access Now.

UCLA Institute for Technology, Law & Policy | Transparency and Corporate Social Responsibility
November 17 at 3:00 PM ET | Register here

RDR Senior Policy and Partnerships Manager Nathalie Maréchal will join UCLA professor Lynn M. LoPucki and SASB Standards Associate Director of Research Greg Waters to discuss the importance of transparency for accountable corporate governance in the tech sector.

A global group of investors with more than $6 trillion in assets has sent letters calling on the 26 tech and telecom companies we ranked in the last RDR Corporate Accountability Index to commit to our core recommendations. We push companies to:

  • implement robust human rights governance
  • maximize transparency on how policies are implemented
  • give users meaningful control over their data and data inferred about them
  • account for harms that stem from algorithms and targeted advertising

Coordinated by the Investor Alliance for Human Rights, the campaign comprises nearly 80 investment groups who are applying pressure on technology companies to resolve these long-standing, systemic issues. The significant increase in support for this statement relative to previous years signals an increased desire among investors for good corporate governance and respect for human rights within the tech sector. The investor groups urged companies to implement key corporate governance measures that we at RDR have long pushed for, including strengthened oversight structures to prevent companies from causing or enabling human rights violations.

Ranking Digital Rights is proud to support the Investor Statement on Corporate Accountability for Digital Rights and investors’ direct engagement with some of the largest ICT companies in the world. Through our annual Corporate Accountability Index, we equip investors and advocates alike with the data and analysis they need to draft and promote shareholder resolutions that put human rights first.

Read the Investor Statement

We invite investors and asset managers seeking guidance on the human rights risks of technology companies to reach out to us at investors@rankingdigitalrights.org.