Traffic in downtown Shanghai. Photo by Nicholas Hartmann via Wikimedia Commons (CC BY-SA 4.0)

When the government of China put out a draft regulation on algorithms in August, it broke ground on a global scale. The draft laid out rules and standards for tech platform recommendation algorithms like no other government has. And it surprised some, especially Western onlookers, by introducing a handful of reasonable protections for users’ rights and interests.

The draft requires companies to be more transparent about their algorithmic systems and to allow users to opt out of such systems. It addresses tech platform addiction and it seeks some protections for people working in the platform-based gig economy (such as delivery workers). It also compels tech platforms to enforce “mainstream” (i.e., Chinese Communist Party) values.

People who have been watching the evolution of China’s tech policy regime in recent years saw the draft as a reflection of the major interests that the Chinese government and Communist Party have been working to balance: tech power on one hand, and public pressure on the other.

China is notorious for its digital censorship and public surveillance systems. But the state is not the only entity that poses a threat to Chinese people’s human rights. Until recently, Chinese tech companies were both enabling state efforts to control information and surveil the public, and reaping handsome profits by collecting and monetizing people’s data. Over the past decade, just like their Silicon Valley counterparts, China’s tech giants have abused user data, ignored market regulation, and deployed exploitative recommendation systems. And people have noticed. Public frustration about these practices and their effects on society reached a fever pitch in 2020 when China saw a spike in fatal traffic accidents resulting from food delivery workers trying desperately to keep up with the algorithmically-generated delivery times issued by their tech platform employers.

Public harms like these don’t just reflect poorly on big tech companies. They lay bare the lack of control that the government has over such corporations. And they pose a threat to the predominant position of the Chinese Communist Party (CCP) in Chinese society.

In order to assert authority over these companies, and to maintain or even improve their image as entities that serve and protect the public, the CCP (which makes key decisions about China’s policy environment) and the government (which implements those decisions) have pushed through a raft of tech-focused regulations in recent years—the Cybersecurity Law, the Personal Information Protection Law, and the Data Security Law—that seek to rein in companies’ data collection and monetization powers and, in some cases, to actually improve protections for the public.

These laws complicate narratives among media and policymakers in the West, who often portray China’s tech companies either as agents spreading Communist ideology and spying globally at the behest of Beijing, or as beacons of capitalism victimized by the Party’s relentless crackdowns that aim to show “who is the real boss.” There is some truth in each of these portrayals, but both fail to acknowledge the importance and rights of Chinese people. These lines of thinking also fail to account for the populist stance of the state.

Caught between the massive powers of the government on one hand, and tech companies on the other, Chinese users and their interests often get squeezed into a position where they have little sway. However, the three groups—the party, the public, and the tech powers—are intertwined and do interact with each other in a dynamic (if sometimes shifting) equilibrium.

This essay explores some critical questions about this dynamic: What is the real reason for the Chinese government’s regulatory crackdowns on tech companies? To what extent is the state trying to placate public complaints about tech giants? And most importantly: How do these things affect millions of users’ interests and rights?

Are these new laws really benefiting users?

Western media often focus on how China’s changing regulatory environment affects the operations and business models of Chinese tech companies, but leave users’ rights out of the picture. At Ranking Digital Rights, we put users’ rights at the center of our research. Over time, our evaluations of three of China’s leading tech giants—Alibaba, Baidu, and Tencent—have shown how China’s regulatory environment has brought some benefits for people’s rights to privacy and security, as well as control over their information, albeit only in areas unrelated to Chinese government surveillance.

China’s regulation of user data collection has undergone a sea change since the adoption of the 2017 Cybersecurity Law, which focused on security and cybercrime protections and established principles of “legality, propriety, and necessity” in user information collection. It was followed by the September 2021 Data Security Law, an effort to protect critical information infrastructure, and then by the Personal Information Protection Law (PIPL), which went into force on November 1. A sweeping data privacy law, PIPL defines personal information and sensitive information, compels data processors to obtain users’ consent prior to collecting their data, and requires that companies allow users to opt out of targeted ads. It also bans automated decision-making that results in price discrimination.

These laws do appear to have brought increased protections for users wanting more control over how tech platforms use and profit from their data. When we reviewed Chinese companies’ policies alongside those of 11 other globally dominant digital platforms, Baidu and Tencent were more transparent about how they collect user information than all the other platforms we rank, including Google, Apple, and Microsoft. Both Baidu and Tencent made explicit commitments to purpose limitation, vowing only to collect data that was needed to perform a given service. Alibaba fell behind major Korean companies Kakao and Samsung, but still outranked all the major U.S. platforms. In our July 2021 evaluation of ByteDance (parent company of TikTok), we found that Douyin (TikTok’s Chinese counterpart) far outpaced TikTok on these metrics, by committing to only collecting necessary information for the service.

Data from Indicator P3a in the 2020 RDR Index

Both Baidu and Tencent also have improved their privacy policy portals, making it easier for users to access privacy policies for various products in one place.

We also found that all three companies provided much more information about their contingency plans to handle data breaches than they had in the past, and more than other companies across the board. This change was likely inspired by the Cybersecurity Law, which requires companies to plan for potential data breaches.

Smaller improvements have emerged as well. On Alibaba’s Taobao, users can opt out of recommendations with a single click. The same is true for targeted ads. These and other updates give the impression that the platforms want to protect the rights of users and stand with the government at the same time.

The drawbacks

It may seem like a happy ending to the story. China’s regulatory environment is clearly more privacy-protective than it was in the past, even as state surveillance practices continue unabated. But even though China’s tech companies have made the right changes to their policies, there’s strong evidence that many of them are not following their own rules.

Tencent and ByteDance have been plagued with scandals and denounced by Beijing for violating the “necessity” rule laid out in the Cybersecurity Law. In May this year, the Cyberspace Administration of China (CAC), the country’s top internet regulator, publicly identified 105 popular apps that had illicitly collected user information and failed to provide options for users to delete or correct personal information. These apps included Baidu Browser and Baidu App, a “super app” interface for finding news, pictures, videos, and other content on mobile. Soon thereafter, Tencent’s mobile phone security app, which is meant to protect the privacy and security of users’ phones, was disciplined by the CAC for collecting “personal information irrelevant to the service it provides,” despite the promises in its policies. Douyin was caught “collecting personal information irrelevant to its service,” despite the fact that its privacy policy states that it only collects user information “necessary” to realize functions and services.

In June 2021, digital news aggregator apps, including Today’s Headline (operated by ByteDance), Tencent News, and Sina News, were publicly rebuked by the CAC for collecting user information irrelevant to the service, collecting user information without user permission, or both.

In August, the Ministry of Industry and Information Technology (MIIT) publicly declared that Tencent’s WeChat (China’s most popular app) had used “contact list and geolocation information illegally.”

Anecdotally, Chinese users have voiced concerns that mobile apps are eavesdropping on their daily conversations, sometimes even when the microphone function is turned off. These accusations have implicated apps ranging from food delivery platforms, to Tencent’s WeChat, to Alibaba’s Taobao. Though it’s hard to find solid evidence, technical tests show that such snooping practices are feasible. Some users have shared their experiences under related topics on Zhihu, a Quora-like platform in China. Ironically, that platform too was accused of eavesdropping on users’ private conversations.

The latest public condemnation from the Chinese government came in November, when MIIT ordered 38 apps, including two run by Tencent (Tencent News and QQ Music), to stop “collecting user information excessively.” Soon after, the Ministry ordered Tencent to submit all new updates of its apps for technical testing and approval to ensure they meet national privacy standards. MIIT has publicly accused the company’s apps of illegally collecting user information four times in 2021.

Although the Personal Information Protection Law requires tech companies to allow users to opt out of targeted advertising, the companies have turned this into a battle of wits. Baidu technically allows users to do this, but the company’s privacy policy does not include any information on where or how to actually opt out. While PIPL was still pending, both Alibaba and Tencent offered options for users to turn off ad targeting (which is on by default), but made the selection time-limited, so that users were reverted to the default after six months. Tencent did not remove the time limit until October 29, when the company was sued in a Shenzhen court (the city where Tencent is headquartered) for infringing user rights; the lawsuit cited, among other things, the time limit on opt-outs for ad targeting. Taobao hurriedly updated its privacy policy and opt-out settings on November 1, the day PIPL took effect.

The draft regulation on algorithms and the voices of Chinese users

As it is still at the drafting stage, we don’t yet know what will appear in the final text of China’s regulation on algorithms. But the draft has one very specific provision that appears to be a direct response to public concern. It requires labor platforms (such as food delivery services) to improve their job distribution, payment, rewards, and punishment systems to protect the rights of contract laborers.

In 2020, Chinese media outlet Renwu reported on how the algorithmic systems powering China’s largest food delivery platforms, including Ele.me (owned by Alibaba) and Meituan (backed by Tencent), were exploiting delivery workers and all but forcing them to violate traffic laws. To keep up with the apps’ algorithmically optimized delivery times, workers were exceeding speed limits, running stop lights, and endangering people’s lives. In August 2020 alone, the traffic police of Shenzhen City recorded 12,000 traffic violations related to delivery workers riding mopeds or converted bicycles. Shanghai City data showed that traffic accidents involving delivery workers caused five deaths and 324 injuries in the first half of 2019. The Renwu story (available here in English) immediately resonated with people’s daily experiences on the street, eliciting tens of thousands of comments.

The public response could not be ignored. Although ordinary citizens are rarely able to influence or shape legislation in China, public safety has become an area in which they do have some sway. The Communist Party, though powerful, needs to respond to public complaints, and it has tied that responsiveness to its efforts to regulate tech companies. An important part of the Party’s legitimacy comes from the notion that it is “serving the people.”

Chinese President Xi Jinping has emphasized this point in recent statements for state media: “The development of the internet and information industry must implement the people-centered idea of development and take the improvement of people’s well-being as the starting point and foothold of informatization, to enable people to acquire more sense of contentment, happiness, and safety.”

Although the draft regulation on algorithms covers a much broader range of issues than just worker rights and safety, it suggests that public pressure can play a role in policymaking in China when certain conditions align.

The future of Chinese users’ rights

In another kind of society, direct pressure and input from civil society organizations and academic experts could help keep pressure on tech companies, hold them accountable to the public, and create an environment where both government and corporate actors would better protect users’ rights. But in China, companies are primarily accountable to Beijing, not to users. It is only in instances where public concern aligns with state interests—most commonly, when the state can appear as “protector” of the people—that public pressure seems to come into play.

Even with new regulations, we can expect China’s tech giants to remain very profitable. The Chinese government’s various new and forthcoming tech-focused laws are intended to curb, but not drastically reduce, corporate power. They constitute a strategic and occasional application of pressure to assert state and Party power, and bring certain benefits to the government. This fits with the government’s long-standing mission to prioritize “healthy and orderly development,” a phrase that appears in countless industry guidelines and policies.

Will Beijing’s campaign to rein in China’s big tech companies persist? Law enforcement campaigns are not easy or cheap. At some stage, as other pressing issues arise, we can expect this agenda item to move lower down on the Party’s priority list, at which point tech companies may be even less inclined to honor their promises.

By Zak Rogoff & Nathalie Maréchal

Identifying content moderation solutions that protect users’ rights to free expression and privacy is one of the toughest challenges we face in the digital era. Around the world, digital platforms are getting due scrutiny of their content moderation practices from lawmakers and civil society alike, but these actors often disagree on how companies might do better. They also routinely fail to consult the people and communities most affected by companies’ failures to moderate content fairly.

But there is agreement in some areas, like on the need for more transparency. Indeed, there is a growing global consensus that companies should be much more transparent and accountable about their processes for creating and enforcing content rules than they are at present.

Today, we’re excited to join colleagues from around the world for the launch of the second edition of the Santa Clara Principles on Transparency and Accountability in Content Moderation, a civil society initiative to provide clear, human rights-based transparency guidelines for digital platforms.

Launched in 2018, the original Santa Clara Principles laid out essential transparency practices that companies could adopt in order to enable stronger accountability around their content moderation practices. The second edition of the principles builds on this work by acknowledging the particular challenges that companies must confront around the world, and by explicitly extending the principles to apply to paid online content, including targeted advertising.

To do this work, Ranking Digital Rights joined more than a dozen civil society organizations to seek feedback on the original set of principles from a range of stakeholders around the world, to ensure the revised edition would reflect the needs of the diverse and growing body of people who use digital platforms. Our goal was to share our expertise in human rights benchmarking and encourage the coalition to publish principles that align with our own standards on governance and freedom of expression, which we have used to evaluate the world’s most powerful tech and telecom companies since 2015.

In particular, we made the case that when it comes to targeted advertising, companies should be held to levels of scrutiny and transparency equal to or higher than those applied to the moderation of user-generated content. Beyond protecting the freedom of expression of advertisers themselves, this will help digital platforms take steps to prevent advertising that discriminates, misleads, harasses, or otherwise interferes with users’ freedom of expression and information rights.

Independent research and reporting has shown that platforms do not adequately enforce national advertising laws, and that they sometimes even violate their own consumer protection-oriented rules. Transparency reporting is a necessary first step toward accountability in this area. Since our 2020 methodology revision, RDR’s indicators have advanced clear standards for advertising transparency that have influenced this and other important policy advocacy efforts.

Read the revised Santa Clara Principles.

Edzell Castle, Angus, Scotland. Photo by John Oldenbuck via Wikimedia Commons. CC BY-SA 3.0

This is the RADAR, Ranking Digital Rights’ newsletter. This special edition was sent on November 16, 2021. Subscribe here to get The RADAR by email.

This season at RDR, we’ve done some deep thinking on one of the fastest-changing aspects of the industry we study: advertising models.

We’ve been waiting and watching to see how things change for Apple and the ad ecosystem around it, following the April 2021 rollout of its App Tracking Transparency (ATT) program, which requires app developers to get user consent before tracking them across the web. Was this move really driven by Apple’s commitment to privacy? Or did it have more to do with the company’s desire to edge out its biggest competitors in the digital ad space?

As of last month, the verdict is in: The majority of iOS users are not opting into third-party tracking — and Apple’s ad business has more than tripled its market share since April 2021, according to the Financial Times. FT also reported that ad revenues for major third-party app companies like Facebook and Snapchat have dropped by as much as 13 percent, apparently as a result of the change.

Then there’s Google. Privacy nerds have heard about Google’s forthcoming “FLoC” system, which will move Chrome users away from third-party cookies and towards a “cohort-based” tracking model that the company says will be better for people’s privacy. But some are skeptical as to how much this program will really protect people’s privacy and security. In late October, we found more reason to worry when a federal judge in New York unsealed the amended 2020 antitrust suit filed against Google by 16 state attorneys general plus the AG of Puerto Rico, who together allege that this initiative is almost entirely profit-driven.

The suit lays out a litany of accusations that the company has engineered a quasi-monopoly over digital advertising markets, colluded with Facebook to control the market, and engaged in a host of related deceptive practices. On the FLoC front, it cites internal company documents indicating that Google’s so-called “Privacy Sandbox” (the origin of the FLoC system) was originally dubbed “Project NERA” and that it was intended to “successfully mimic a walled garden” in what a staffer described as an effort to “protect our margins.” RDR’s Aliya Bhatia and Ellery Biddle wrote about it this week in Tech Policy Press.

Although both Google and Apple say that they’re making these changes in order to better protect user privacy, the profit motives are clear, present, and enormous. While the changes may whittle away at the troves of data that so many digital companies have on us, they also will help to consolidate our digital dossiers in the hands of a few uniquely powerful platforms, and reduce or even eliminate many of the smaller players in the ecosystem. If we’re really moving to a paradigm where first-party tracking dominates the sector, we have to ask: How might this shift affect people’s rights and matters of public interest? RDR’s Ellery Biddle and Veszna Wessenauer dug into this in our latest blog post.
Read the post here →

RDR MEDIA HITS
Washington Post: The Facebook Files have put Meta’s controversial news feed ranking system back in the spotlight, causing some lawmakers to suggest that people should be able to use platforms like Facebook without having to submit to their recommendation algorithms. Speaking about the issue with the Washington Post, RDR’s Nathalie Maréchal said, “I think users have the right to expect social media experiences free of recommendation algorithms.” She also noted that while Meta’s research on chronological feeds may be compelling, it should be taken with a grain of salt: “…as talented as industry researchers are, we can’t trust executives to make decisions in the public interest based on that [internal] research,” she said. Read via Washington Post.

CBS News: When Facebook (now Meta) announced plans to end its use of some facial recognition systems, many privacy advocates celebrated. But RDR’s Nathalie Maréchal urged caution about the purported change, noting that the announcement came amid policymakers criticizing the company for putting profit ahead of people’s rights. The company is “trying to sidestep the real and extremely important questions about its governance…and [its] transparency record,” she said to CBS News. Lo and behold, Meta announced last week that it will continue collecting and using biometric data in the metaverse. Read via CBS News.

EVENTS
The Internet Governance Forum | Best Practices in Content Moderation and Human Rights
December 8 at 11:30 AM ET | Register here
RDR’s Veszna Wessenauer will participate in a session at IGF on the relationship between digital policy and the established international frameworks for civil and political rights as set out in the UDHR and ICCPR.


By Veszna Wessenauer and Ellery Roberts Biddle

When Apple announced its plans to tighten restrictions on third-party tracking by app developers, privacy advocates—including us—were intrigued. The company seemed to be charting a new course for digital advertising that would give users much more power to decide whether or not advertisers could track and target them across the web. But we also wondered: What was in it for Apple?

Now we know. The company’s advertising business has more than tripled its market share since it rolled out the App Tracking Transparency (ATT) program in April 2021, which requires app developers to get user consent before tracking them across the web.

Apple has become so powerful that it has changed the rules of the game to its own benefit, and it is now effectively winning. The Financial Times reported in October that Apple’s ads now drive 58 percent of all downloads in the App Store, and more recently reported that ad revenues for major third-party app companies like Facebook and Snapchat have dropped by as much as 13 percent as a result.

It is clear that Apple’s move, alongside Google’s forthcoming transition to tracking people in “cohorts” rather than at the individual level, could shake up the uniquely opaque (but almost certainly icky) underworld of the internet that is ad tech. Every second we spend online, advertisers hawking everything from prescription drugs to political candidates compete for our attention. Internet companies use the ever-growing troves of information that they have about us, much of it gathered up with the use of third-party cookies, to sell ad slots to the highest bidder. Today there is a vast ecosystem of companies that carry out this particular function of using our data to enable targeted advertising. But now two of the industry’s biggest companies are shifting away from this model, albeit in different formats.

Although both companies say that they’re making these changes in order to better protect user privacy, the profit motives are clear, present, and enormous. While the changes may whittle away at the troves of data that so many digital companies have on us, they also will help to consolidate our digital dossiers in the hands of a few uniquely powerful platforms, and reduce or even eliminate many of the smaller players in the ecosystem.

If we’re really moving to a paradigm where first-party tracking dominates the sector, we have to ask: How might this shift affect people’s rights and matters of public interest? We know a lot about how these systems will affect people’s privacy, but what about other fundamental rights, like the right to information or non-discrimination?

Third-party tracking is now tied to some of the most insidious and harmful targeting practices around. With the help of a massive amount of third-party data—collected from third-party websites or apps through technical means such as cookies, plug-ins, or widgets—advertising can be hyper-personalized and tailored to consumer segments or even individuals. Political campaigns can target us to the point that they can swing an election, or tell us to go vote on the wrong day. Conspiracy theorists can capture vulnerable eyeballs and convince people that COVID-19 is a hoax. But it’s not entirely clear that the move away from third-party tracking will change these dynamics.
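
To make the mechanism above concrete, here is a minimal sketch of how a third-party tracking cookie works, written as a tiny Node/TypeScript endpoint. The domains and the endpoint are hypothetical and real ad tech stacks are far more elaborate; the point is simply that one cookie, scoped to the tracker’s domain, follows a person across every site that embeds the tracker.

```typescript
// Minimal sketch of a third-party tracking endpoint (hypothetical domains).
// A publisher page at news.example embeds <img src="https://tracker.example/pixel.gif">;
// the response sets a cookie scoped to tracker.example, so the same identifier
// comes back from every other site that embeds the same pixel.
import { createServer } from "node:http";
import { randomUUID } from "node:crypto";

createServer((req, res) => {
  // Reuse the visitor's existing ID if the browser sent our cookie back.
  const existing = /uid=([^;]+)/.exec(req.headers.cookie ?? "")?.[1];
  const uid = existing ?? randomUUID();

  // The Referer header tells the tracker which publisher page embedded the pixel.
  console.log(`visitor ${uid} seen on ${req.headers.referer ?? "unknown page"}`);

  // SameSite=None; Secure is required for the cookie to travel in third-party contexts.
  res.setHeader("Set-Cookie", `uid=${uid}; SameSite=None; Secure; Max-Age=31536000`);
  res.setHeader("Content-Type", "image/gif");
  res.end(); // a real tracker would return a 1x1 transparent GIF body here
}).listen(8443);
```

Blocking third-party cookies breaks exactly this pattern, which is part of why the industry is shifting toward first-party data and techniques like fingerprinting.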

We can only know how good or bad these moves really are for users’ rights, and for society at large, if we know what’s happening to our data, and if companies give us some ability to decide who gets it and how they can use it. Unfortunately, neither Apple nor Google (nor any of the companies we evaluate) has ever met our standards for these kinds of disclosures.

This season, we’ve been studying this impending shift, assessing the motivations that seem to be driving Apple and Google to make these changes, and comparing companies’ public statements about their plans to their actual policies on things like algorithms and ad targeting. We are using our own standards to inform our understanding of how these changes will affect users’ rights, and what human rights-centric questions we should be asking Google as it rolls out its new “FLoC” system.

Apple is getting creepy

In 2020, Apple’s announcement of the ATT plan triggered loud public criticism from Facebook (now Meta). Most users access Meta’s services via mobile devices, many of which are made by Apple. This makes Apple the gatekeeper for any application available to iPhone or iPad users, Meta included.

A very public tête-à-tête soon ensued, much of which stemmed from an open letter that we at RDR wrote to Apple, pressing the company to roll out these changes on schedule in the name of increasing user control and privacy.
In response to our letter, Apple Global Privacy Lead Jane Horvath wrote that “tracking can be invasive and even creepy.” She singled out Meta, saying that the company had “made clear that their intent is to collect as much data as possible across both first- and third-party products to develop and monetize detailed profiles of their users.”
We stand by our original position, which was rooted in our commitment to user privacy and control. But we don’t want to see these things come at the expense of competition.

With the new system in place and its newly dominant position in the ad market, we have to ask: What if Apple engages in similarly “creepy” practices by exploiting the boatloads of first-party data it has on its users? It is worth noting that while Apple now requires developers to explicitly capture user consent for tracking (via “opting in”), Apple users are subject to a separate set of rules about how Apple collects and uses their data. If they want to use Apple’s products, they have no choice but to agree. Also, recent research by the Washington Post and Lockdown suggests that some iPhone apps are still tracking people via fingerprinting on iOS, even when they’ve opted out.

The public face-off between the companies helped to clarify what actual motivations may have driven the change on Apple’s part. The changes put the company in an even more powerful position to capture, make inferences about, and monetize our data. If its ad revenues since the change was implemented are any indication, the plan is working.
Apple has published policies acknowledging that it engages in targeted advertising. But there’s a lot missing from the company’s public policies and disclosures about how it treats our data.

  • Apple has published no public documentation explaining whether or how it conducts user data inference, a key ingredient in monetization of user data.
  • Apple discloses nothing about whether or not it collects user data from third parties via non-technical means.
  • Apple offers no evidence that it conducts human rights impact assessments on any of these activities.

When it comes to FLoC, what should we be asking Google?

Although it won’t debut until 2023, we have some details about Google’s “Federated Learning of Cohorts” aka FLoC, a project of Google’s so-called Privacy Sandbox initiative. The company describes the system as “a new approach to interest-based advertising that both improves privacy and gives publishers a tool they need for viable advertising business models.” What the company doesn’t say is that this new paradigm may actually shut out other advertising approaches altogether.

From what Google has said so far, we know that FLoC will use algorithms to put users into groups that share preferences. The system will track those groups, rather than allowing each of us to be individually tracked across the web. Advertisers will be able to show ads to Chrome users based on these cohorts, which will contain a few thousand people each. The cohorts will be updated weekly, to make sure that the targeting is still relevant and to reduce the possibility of users becoming identifiable at the individual level.
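
Public documentation from the origin trial described the cohort ID as a “SimHash” computed from the domains a browser visited during the week. The sketch below is our simplified illustration of that idea, not Google’s implementation; the hash function, bit width, and example domains are placeholders.

```typescript
// Simplified illustration of SimHash-style cohort assignment from visited domains.
// Not Google's implementation: the hash choice, bit width, and domains are placeholders.
import { createHash } from "node:crypto";

const COHORT_BITS = 16;

function cohortId(visitedDomains: string[]): number {
  const tally = new Array(COHORT_BITS).fill(0);
  for (const domain of visitedDomains) {
    const digest = createHash("sha256").update(domain).digest();
    for (let bit = 0; bit < COHORT_BITS; bit++) {
      // Each visited domain "votes" +1 or -1 on each bit position.
      const set = (digest[bit >> 3] >> (bit & 7)) & 1;
      tally[bit] += set ? 1 : -1;
    }
  }
  // The sign of each running total becomes one bit of the cohort ID, so browsers
  // with overlapping browsing histories tend to land in the same (or a nearby) cohort.
  return tally.reduce((id, count, bit) => (count > 0 ? id | (1 << bit) : id), 0);
}

// Similar histories tend to produce similar IDs; advertisers would see only the number.
console.log(cohortId(["news.example", "recipes.example", "shoes.example"]));
console.log(cohortId(["news.example", "recipes.example", "travel.example"]));
```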

The Electronic Frontier Foundation’s Bennett Cyphers has noted that this weekly update will make FLoC cohorts “less useful as long-term identifiers, but it also [will make] them more potent measures of how users behave over time.” It is also worth noting that the system will make it much easier to effectively use browser fingerprinting techniques, which do enable individual-level targeting.
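
To make the fingerprinting concern concrete, here is an illustrative sketch of what any script running in a browser can do today. The handful of signals below is far smaller than what commercial fingerprinting libraries collect, but it is enough to show why the technique enables individual-level tracking with no cookie to block or delete.

```typescript
// Illustrative browser fingerprint: a handful of "harmless" signals, combined,
// yield an identifier that is fairly stable for one browser and rare across others.
// Real fingerprinting libraries use many more signals (canvas, fonts, audio, etc.).
function fingerprint(): string {
  const signals = [
    navigator.userAgent,
    navigator.language,
    `${screen.width}x${screen.height}x${screen.colorDepth}`,
    String(new Date().getTimezoneOffset()),
    String(navigator.hardwareConcurrency ?? ""),
  ].join("|");

  // Tiny non-cryptographic hash (FNV-1a) just to compress the signals into one value.
  let hash = 0x811c9dc5;
  for (let i = 0; i < signals.length; i++) {
    hash ^= signals.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash.toString(16);
}

// Unlike a cookie, there is nothing for the user to clear: the value simply
// reappears every time the same browser loads a page that runs this code.
console.log(fingerprint());
```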

Learn more about FLoC with these explainers from EFF and RestorePrivacy.

It is important to understand that Google is not actually moving away from a targeted advertising business model. All we really know at this stage is that FLoC will constitute a move towards a paradigm where fingerprinting technology becomes much more powerful and easier to deploy, and where the industry’s signature tracking techniques are algorithmically driven. If it’s anything like Google Search, or the company’s other products, we can expect to find very little public information on how these algorithms are built or deployed.

We also expect that it will become even more difficult to audit and hold the company accountable than was the case with cookies, which are easy to test for privacy violations. Google has made big promises about supporting and building a more open web. But from where we’re standing, FLoC looks like a new variation on the walled garden.

In fact, documents that were recently unsealed in a massive antitrust suit filed against Google charge that this is all an effort to shore up power in the online advertising market. The suit cites internal company documents saying that Project NERA, the precursor to the Privacy Sandbox, was meant to “successfully mimic a walled garden across the open web [so] we can protect our margins.” The unsealed documents also suggest that the “Privacy Sandbox” name and branding were rolled out in order to reframe the changes using privacy language, and to deflect public scrutiny. The court filings also provide a lot of support for the idea that Google’s main constituency here is advertisers, not users.

Will this really work? Does Google have enough data about us for this to be effective? In short, yes. Google can afford to shift to a system like FLoC precisely because of its monopoly status in the browser market alongside other key markets. Thanks to its preponderance of services—Chrome Browser, Gmail, Google Drive, Google Maps, and, of course, Android—the company has access to incredibly rich and sensitive user data at scale, second to no other company outside China. While Google’s business model relies heavily on advertising, it does not need to rely on third-party data in order to be an effective seller of ad space. With this transition, it could effectively cut out the third-party ad sellers altogether.
It’s also important to consider how this change will affect the broader market. We’re moving from a diverse (if unsavory) array of players in the ad tech underworld, to a paradigm that will concentrate profit and power in the hands of a powerful few. Google controls over two-thirds of the global web browser market. Once the Chrome browser starts blocking third-party cookies, most internet users will be using browsers without third-party cookies.
Although it will probably bring some benefits for users, the change is clearly bad news for many of the actors in the ad tech ecosystem that rely heavily on third-party data, and for the ad tech firms selling and buying this data. For firms that are not able to collect data on users in the ways that Google, Apple, or Facebook can, the end of third-party cookies will either snuff out their business models or force radical changes to them.

Here are our key questions for Google:

Will users be able to see what groups they belong to and on what grounds under FLoC? Google should make it clear to users what controls they have over their information and preferences under FLoC.
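
During the origin trial, Chrome exposed the cohort to web pages through a document.interestCohort() call. The snippet below assumes that trial API (it is not a web standard and could change before any wider rollout); notably, all a page, or a curious user, gets back is an opaque number, with no account of the browsing activity that produced it.

```typescript
// What a page could learn during Chrome's FLoC origin trial (assuming that trial
// API; it is not a web standard and may change or disappear). The browser returns
// only an opaque cohort ID, with no explanation of how it was derived.
type FlocDocument = Document & {
  interestCohort?: () => Promise<{ id: string; version: string }>;
};

async function readCohort(): Promise<void> {
  const doc = document as FlocDocument;
  if (!doc.interestCohort) {
    console.log("FLoC is not enabled in this browser.");
    return;
  }
  const { id, version } = await doc.interestCohort();
  console.log(`cohort ${id} (algorithm version ${version})`);
}

readCohort();
```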

How will Google identify and address human rights risks in its development and implementation of FLoC? Beyond privacy, targeted advertising can pose risks to other rights, like rights of access to information or non-discrimination. If the company identifies problems in these areas, how will it address them?

Will Google stop collecting third-party data on its users through non-technical means when it starts blocking third-party cookies through its browser? Companies may acquire user information from third parties as part of a non-technical, contractual agreement as well. For example, Bloomberg reported in 2018 that Google buys credit card information from Mastercard in order to track which users buy a product that was marketed to them through targeted advertising. Such contractually acquired data can become an integral part of the digital dossier that a company holds on its users and it can form the basis for inferred user information.

In the 2020 RDR Index, we found that most companies say nothing about whether and how they acquire data through contractual agreements.

None of the companies disclosed what user information they collect from third parties through non-technical means.

Data from Indicator P9 in the 2020 RDR Index.

As these companies consolidate power over our data, what should digital rights advocates focus on?

Google and Apple—both of which have made public commitments to human rights—are trying to position themselves as champions of privacy on the strength of the changes they have introduced or plan to introduce. This raises questions about whether these companies consider the risks associated with targeted advertising beyond privacy.
In the 2020 RDR Index we introduced standards on targeted advertising and algorithmic systems to address harms stemming from companies’ business models. None of the digital platforms we ranked in 2020 assess privacy or freedom of expression risks associated with their targeted advertising policies and practices. Facebook was the only company that provided some information on how it assesses discrimination risks associated with its targeted advertising practices, and this was limited in scope.

When we think of some of the long-term societal effects of targeted advertising, like disinformation around elections and matters of public health, these questions must be part of the equation. People need and deserve to have accurate information about how to protect their health in a pandemic. But we know from independent research and reporting that targeted ads have had an adverse impact on people’s ability to access such information. When it comes to elections, jobs, housing, and other fundamental parts of people’s lives, we also know that Big Tech companies have enabled advertising that discriminates on the basis of race, gender, and other protected characteristics. This is equally harmful. In some cases, it is a violation of U.S. law.

Will the move away from third-party cookies mean the end of tracking and targeting? Not likely. User data is still seen as an essential way to generate added value for digital platforms. Companies like Google and Facebook are digital gatekeepers and have their own walled gardens of (first-party) user data that no one else can see. Google claims that with the introduction of FLoC it will no longer be possible to target individuals, but it is unclear how the company will process users’ browsing data, and what it will infer from that data, in order to allocate them to cohorts.

None of the companies in the 2020 RDR Index provided clear information on their data inference policies and practices.

Companies disclosed nothing about the selected indicators.

Data from Indicators P7 and P3b in the 2020 RDR Index.

Are any of these changes going to alter company business models to better align with the public interest? In the case of Google, Chrome users will no longer have to contend with the opacity of third-party tracking. Rather than wondering what third parties might have their data, and how they’re using it, they will know that most of their data sits with Google.

But without more transparency from the company, it will be just as impossible to find out how Google uses our data, and how our data might serve advertisers seeking to do things like swing an election or promote anti-vaccine propaganda. The same will be true for Apple. Until both companies are forced to put this information out for public view, we will have about as little knowledge of (or control over) how our information is being used as we do now.

London street art. Photo by Annie Spratt. Free to use under Unsplash license.

This is the RADAR, Ranking Digital Rights’ newsletter. This special edition was sent on October 21, 2021. Subscribe here to get The RADAR by email.

Since the Wall Street Journal’s release of the Facebook Files and the subsequent debut of whistleblower Frances Haugen in the public conversation, we’ve seen a lot of pushback from Facebook. Company executives have claimed that Haugen didn’t have sufficient knowledge about the practices she brought to light, argued that the WSJ series “mischaracterized” Facebook’s approach, and attacked a network of journalists working on a series of follow-up reports drawing on the documents.

The company can obfuscate and deflect as it wishes, but the data Facebook is willing to release—and that which it keeps private—speaks for itself. Companies often wax poetic about the social and commercial benefits that they bring to people and businesses, but when it comes to their concrete effects on people’s lives and rights, policies and practices are what actually count. That is what RDR is here to measure. Although we have a strong focus on company policies, which establish a baseline for what they say they will do, we also ask companies to publish concrete evidence of their practices, with things like transparency reports.

Last week, we “cross-checked” Facebook, comparing company statements and policies with the Haugen revelations, and with our own data and findings since 2015. Again and again, we see that in areas where Facebook is most opaque about its practices, such as targeted advertising and the use of algorithms to enforce ad content policies, the hard evidence laid out by Haugen and other whistleblowers like Sophie Zhang paints a troubling picture of how the company treats its users. As Haugen told the U.S. Congress a few weeks ago, profits do take priority over the public interest at Facebook.

Read “Cross-checking the Facebook files” →

If Facebook’s decisions are mainly driven by profit, then we need to follow the money. Facebook’s earnings reports show that at least 98% of the company’s revenue comes from advertising, and we know that ad sales on Facebook are driven by the company’s vast data collection machine. That’s why we’ve joined Fight for the Future’s call on Congress to pass federal privacy legislation. We hope our friends and allies will consider doing the same.

See our 2020 report card for Facebook →

RDR’s 2020 encryption scores for digital platforms. See full results.

State and corporate eyes are still watching us. So let’s encrypt!

Happy Global Encryption Day! At RDR, we push companies to encrypt user communications and private content so that users can control who has access to them. In our 2020 research, we found that some of the world’s biggest companies still have a very long way to go on encryption.

Since 2015, we’ve evaluated companies’ use of encryption by looking for evidence that they encrypt the transmission of user communications by default and using unique keys. We also look for evidence that the company allows users to secure their private content using end-to-end encryption, or full-disk encryption (where applicable), and ask if these things are enabled by default. The chart above shows digital platforms’ scores on our encryption indicator from 2020.
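
For readers less familiar with the distinction our indicators draw: transmission (transport) encryption protects messages only on their way to the company’s servers, while end-to-end encryption means only the communicating users hold the keys, so the platform in the middle relays ciphertext it cannot read. The sketch below is a bare-bones illustration of the end-to-end idea using Node’s built-in crypto primitives; it is not any company’s implementation, and real messaging protocols add authentication, forward secrecy, and much more.

```typescript
// End-to-end encryption in miniature: each user holds a key pair, the shared secret
// is derived on the users' own devices, and the service relaying the message only
// ever sees ciphertext. (Illustration only; real protocols such as Signal's add
// authentication, forward secrecy, and much more.)
import {
  generateKeyPairSync,
  diffieHellman,
  createCipheriv,
  createDecipheriv,
  randomBytes,
} from "node:crypto";

const alice = generateKeyPairSync("x25519");
const bob = generateKeyPairSync("x25519");

// Each side combines its own private key with the other side's public key to derive
// the same 32-byte secret; the secret itself never crosses the network.
const aliceSecret = diffieHellman({ privateKey: alice.privateKey, publicKey: bob.publicKey });
const bobSecret = diffieHellman({ privateKey: bob.privateKey, publicKey: alice.publicKey });

// Alice encrypts with her copy of the secret...
const iv = randomBytes(12);
const cipher = createCipheriv("aes-256-gcm", aliceSecret, iv);
const ciphertext = Buffer.concat([cipher.update("meet at noon", "utf8"), cipher.final()]);
const tag = cipher.getAuthTag();

// ...and this ciphertext is all the platform in the middle can observe.
const decipher = createDecipheriv("aes-256-gcm", bobSecret, iv);
decipher.setAuthTag(tag);
const plaintext = Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
console.log(plaintext); // "meet at noon"
```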

We observed a steep decline in encryption standards for the Russian companies that we evaluate, Yandex and Mail.Ru, owing to proposed regulations that would limit the use of encryption. While Mail.Ru (owner of VKontakte) never had especially strong practices in this area, search engine leader Yandex distinguished itself on encryption in years past, outperforming Google, Facebook, and Microsoft as recently as 2019.

Of course private companies like the ones we rank are only part of the equation. Companies specializing in surveillance software continue to reap huge profits from sales to government agencies that target legitimate criminal activity, but also people like activists and journalists who are working to hold their governments to account. Thanks to years of research by groups like The Citizen Lab and Amnesty International, and the more recent revelations around the broad-based use of NSO Group’s Pegasus software, there is more hard technical evidence in the public domain than ever before of how these technologies are used and who they harm.

This week, we are proud to support a letter to the U.N. Human Rights Council pushing members to mandate independent investigations of the sale, export, transfer, and use of surveillance technology like Pegasus. We also join civil society groups around the world, in a campaign organized by the Internet Society, to call on both governments and the private sector to enhance, strengthen, and promote use of strong encryption to protect people everywhere.

Global investors are calling on tech companies to implement our recommendations

A group of global investors with more than $6T in assets called on the 26 tech and telecom companies we ranked in the last RDR Corporate Accountability Index to commit to some of our high-level recommendations. In concert with our report, the Investor Alliance for Human Rights brought together nearly 80 investor firms to support this effort. The group calls on companies to:

  • implement robust human rights governance;
  • maximize transparency on how policies are implemented;
  • give users meaningful control over their data and data inferred about them;
  • and account for harms that stem from algorithms and targeted advertising.

RDR Media Hits

Tech Policy Press: Will creating third-party recommender systems or “middleware” solve content problems on Facebook? At a recent symposium hosted by Tech Policy Press, featuring Daphne Keller and Francis Fukuyama and moderated by Richard Reisman, RDR Senior Policy and Partnerships Manager Nathalie Maréchal explained why she’s not convinced. Beyond the numerous privacy pitfalls of third-party recommender systems, this solution doesn’t address the core issue at hand: the surveillance capitalism business model. Read the transcript at Tech Policy Press.

MIT Tech Review: RDR Projects Director Ellery Biddle spoke with the Tech Review’s Karen Hao about the viability of Facebook whistleblower Frances Haugen’s proposal to regulate algorithms by creating a carve-out in Section 230 of the Communications Decency Act. In short, she says we’ll need a lot more transparency around algorithms before we can look to solutions like this one. Read via MIT Tech Review.

The Logic: The Government of Canada’s proposed online harms bill is “unworkable,” according to RDR’s Maréchal. She offered key points from RDR’s comments on the bill, in an interview with The Logic, a Canadian publication covering the innovation economy. Read via The Logic (paywalled).

National Journal: Maréchal also spoke with the National Journal to push back on Rep. Pallone’s proposed bill to reform Section 230, saying that the bill “falls into the same trap of all the other well-intentioned 230 bills.” Pointing to the experience of sex workers in the wake of SESTA/FOSTA carve-outs, Maréchal asserted that the carve-outs often lead to companies erring on the side of mass removals of content posted by users, forcing marginalized individuals off the internet. Read via National Journal.

Events

UCLA Institute for Technology, Law & Policy | Power and Accountability in Tech
November 5 at 4:00 PM ET | Register here

RDR Director Jessica Dheere joins UCLA’s week-long conference examining corporate power, multi-stakeholder engagement, and solutions to uphold human rights. Jessica will speak on a panel alongside Nandini Jammi, co-founder of Check My Ads; Lilly Irani, associate professor of Communication and Science Studies at UC San Diego; and Isedua Oribhabor, business and human rights lead at Access Now.

UCLA Institute for Technology, Law & Policy | Transparency and Corporate Social Responsibility
November 17 at 3:00 PM ET | Register here

RDR Senior Policy and Partnerships Manager Nathalie Maréchal will join UCLA professor Lynn M. LoPucki and SASB Standards Associate Director of Research Greg Waters to discuss the importance of transparency for accountable corporate governance in the tech sector.