RDR is proud to support Accountable Tech’s rulemaking petition to ban surveillance advertising. After extensive research, stakeholder consultations, and policy analysis, our team has concluded that the surveillance advertising business model poses a fundamental threat to civil and human rights, liberal democracy, and the public interest. There can be no targeting without surveillance, and this toxic business model must be abolished. To this end, we urge the FTC to use its rulemaking authority to ban surveillance-based advertising.

At a recent hearing hosted by the U.S. House Energy and Commerce Committee, RDR Senior Policy & Partnerships Manager Dr. Nathalie Maréchal argued that “a business model that relies on the violation of rights will necessarily lead to products and behaviors that create and amplify harms.” Banning surveillance-based advertising is an urgent public policy concern in need of remedy. Whether this comes through FTC rulemaking or legislation (such as the Banning Surveillance Advertising Act introduced earlier this month), the time to act is now.

Read RDR’s submission to the FTC

Written by Dr. Nathalie Maréchal, senior policy & partnerships manager, and Alex Rochefort, policy fellow.


Today, Alphabet ($GOOGL) shareholders filed a set of proposals ahead of the annual shareholder meeting this spring covering major human rights issues, ranging from algorithmic transparency and data security to disinformation.

Ranking Digital Rights is proud to have worked with Alphabet shareholders and the Investor Alliance for Human Rights to develop one of the proposals, which calls on the company to conduct a human rights impact assessment on Google’s forthcoming Federated Learning of Cohorts (FLoC) technology. The company has billed FLoC as a privacy-respecting alternative to cookie-based tracking, which has enabled companies to collect massive volumes of personal data and use it to target ads. However, as we wrote last fall, Google’s dominant position in the data market and its opacity about how it obtains and handles our data suggest that the shift to FLoC is actually intended to consolidate power over online ad targeting – not to improve user privacy.

We are also glad to see another proposal that would compel the company to stop offering dual-class shares in future stock sales. The resolution calls on the company to ensure that all future shares carry one vote each and to set a timeframe for phasing out dual-class shares. If approved, this would effectively dismantle the system that grants differential voting rights to a favored class of founders and insiders – like Larry Page and Sergey Brin – and give standard voting power to independent shareholders, many of whom are trying to hold the company accountable to the public.

“Governments and tech titans are pushing to reassert control over people’s data and content, each in their own way,” said Jan Rydzak, RDR’s Company and Investor Engagement Manager, in a statement published today by the Investor Alliance for Human Rights.

“Alphabet will play a pivotal role in shaping this future landscape. Many of these proposals address areas of critical human rights risk, from falling in with authoritarian regimes to transforming how companies use personal data to target ads. Alphabet must show that it takes the risks of these endeavors seriously if it aspires to be a responsible steward of the internet.”

Read the entire statement here and Google’s filing here.

Mobile phone stall in Yangon, Myanmar. Photo by Remko Tanis via Flickr (CC BY-NC-ND 2.0)

This is the RADAR, Ranking Digital Rights’ newsletter. This special edition was sent on December 12, 2021. Subscribe here to get The RADAR by email.

It’s our last Radar of 2021! As we mark International Human Rights Day (December 10) and look ahead to the new year, we’re also looking back at significant changes and moments of reckoning that we saw in 2021 among the companies we rank.

Again and again, we saw governments taking bold steps to assert power over tech and telecom companies alike. The year began with the mob attack on the U.S. Capitol, after which U.S. policymakers brought new levels of scrutiny to Meta (a.k.a. Facebook) and its unique role as a platform for organizing acts of violence. The subsequent emergence of the Facebook Papers gave rise to yet another series of hearings and inquiries into the company’s systems and profit model. We could spend all our time talking and writing about Meta, but other companies demand our scrutiny too.

Telenor’s uncertain fate in Myanmar
In February, after the coup d’état in which the military ousted Myanmar’s fragile civilian leadership, numerous major corporations sought to exit the country, among them the Norwegian-owned telco Telenor. In July, Telenor announced plans to sell its Myanmar subsidiary, Telenor Myanmar, to M1 Group, a Lebanese conglomerate with a record of corrupt practices and human rights abuses.

Soon thereafter, it came to light that the military had ordered telecommunications providers to install surveillance technology on their networks to help boost the military’s snooping capacity. Human rights advocates in Myanmar and around the world (including RDR) pushed Telenor to take responsibility for its human rights obligations and stand up against military demands, rather than simply throwing up its hands and walking away. To date, the military regime has stalled the sale, and two local companies with close ties to the military now appear to be vying for a stake in the operation.

Twitter is still blocked in Nigeria
In June, Twitter was blocked nationwide in Nigeria after the company removed a tweet from the official account of President Muhammadu Buhari. The tweet contained a veiled threat against the Igbo people, the third largest ethnic group in the country. Our colleagues at Paradigm Initiative called out the Nigerian government for violating Nigerians’ rights to freedom of expression, which are protected under both local and international law.

Nigerian government spokespersons have emphasized that the decision to block Twitter came after numerous incidents of concern, including now-former CEO Jack Dorsey’s vocal support for Nigeria’s #EndSARS movement, in which young Nigerians used Twitter to demonstrate against extrajudicial violence carried out by state security forces in the country. Twitter remains blocked in Nigeria today.

Why did Irancell data disappear from MTN’s transparency report?
South African telco MTN earned high marks in the 2020 RDR Index for improvements in a variety of areas, and it released its first transparency report in 2020. The report offered key figures on things like government requests for user data and location information for most of MTN’s subsidiaries in Africa and the Middle East, including Irancell, the company’s Iranian subsidiary. But MTN’s most recent transparency report includes no data for Irancell. The new report notes that Iran (alongside Afghanistan, Botswana, Syria, and Yemen, none of which appeared in the previous report) was excluded due to “insufficient information and in-country reporting limitations.”

Our colleagues at Taraaz suspect that MTN Irancell—a joint venture with government-linked Kowsar Sign Paniz—faced pressure from the Iranian government to remove evidence of state efforts to surveil users or interfere with connectivity. Together, we called on the company to publicly explain the limitations it faced and why Iran was excluded from the report this year. We have yet to receive a reply.

Putin allies claim majority ownership over VK
Russian government efforts to control online speech and activities are ramping up yet again. Last week, Russia’s VK (parent company of VKontakte and Odnoklassniki, the country’s most popular social media services) was sold to Sogaz, a state-run insurer in Russia that is majority-owned by Yuri Kovalchuk, a key ally of Russian president Vladimir Putin. And Vladimir Kiriyenko, the son of top Kremlin official Sergei Kiriyenko, will become VK’s new CEO. VK has had close ties with the government for some time—original founder and CEO Pavel Durov said the FSB state security services pressured him to sell the company in 2014, when the network became a key tool for the Euromaidan uprisings in Ukraine. But this week’s sale may mark a new era of closeness between the company and the Kremlin. In a roundup about the sale and its implications for media freedom, Meduza writes:

“When businessmen ‘supervise’ something like a publishing house or an online platform, their concepts of freedom and nonfreedom might fall out of alignment with the Kremlin’s thinking, making scandals possible. That becomes much less likely when guessing the Kremlin’s thinking is as easy as telephoning Dad.”

What’s next?
From where we’re sitting, it looks as if governments are feeling more emboldened than ever to coerce companies into operating for their benefit (see Myanmar and Iran), to push them to the sidelines (see Twitter in Nigeria), or to take outright control of them, as in the case of Russia’s VK.

In her book Consent of the Networked, RDR founder Rebecca MacKinnon argued that the “convergence of unchecked government actions and unaccountable company practices threatens the future of democracy and human rights around the world.” As we look ahead to 2022, this convergence will be top of mind. We wish all our readers a happy new year, and look forward to reconnecting in January!

Nathalie Maréchal to testify before U.S. House Committee on Energy and Commerce
Today at 11:30am EST, RDR Senior Policy and Partnerships Manager Nathalie Maréchal will testify before the U.S. House Committee on Energy and Commerce, at a hearing focused on holding Big Tech accountable and promoting a safer internet for all.

Watch the hearing video →

In her remarks, Maréchal will argue that addressing harms through content-level policy provides only symptomatic relief, and that instead, Congress should look at what drives profits: a business model that is fueled by targeted advertising and opaque algorithmic systems that promote engagement at all costs.

Read Maréchal’s written testimony →

How is China’s Big Tech crackdown affecting people’s rights?
In recent years, China’s government has passed a raft of laws focused on privacy and data protection that have reined in some of the country’s biggest tech companies, at least for now. What effects do these laws have on the rights of ordinary Chinese users? In a new essay for RDR, Research Analyst Jie Zhang digs into this question. Here’s an excerpt:

These laws complicate narratives among media and policymakers in the West, who often portray China’s tech companies either as agents spreading Communist ideology and spying globally at the behest of Beijing, or as beacons of capitalism victimized by the Party’s relentless crackdowns that aim to show “who is the real boss.” There is some truth in each of these portrayals, but both fail to acknowledge the importance and rights of Chinese people. These lines of thinking also fail to account for the populist stance of Beijing.

Caught between the massive powers of the government on one hand, and tech companies on the other, users and their interests often get squeezed into a position where they have little sway. However, the three groups—the party, the public and the tech powers—are intertwined and do interact with each other in a dynamic (if sometimes shifting) equilibrium.

Read the essay. →

Santa Clara Principles 2.0: Advancing transparency standards for digital platforms
We’re excited to join colleagues from around the world for this week’s launch of the second edition of the Santa Clara Principles on Transparency and Accountability in Content Moderation, a civil society initiative to provide clear, human rights-based transparency guidelines for digital platforms. First launched in 2018, the original Santa Clara Principles laid out essential transparency practices that companies could take to enable stronger accountability for their content moderation decisions.

The second edition of the principles builds on this work by acknowledging the unique challenges that companies must confront around the world, and by explicitly extending the principles to apply to paid online content, such as targeted advertising.

Read our blog post. →

Other campaigns we’re supporting

  • Chilean digital rights group Derechos Digitales is demanding that Chile’s congress conduct an open, fully transparent public consultation on a bill that would regulate digital platforms in Chile. Read the statement.
  • European Digital Rights (EDRi) brought together civil society organizations to call on the Council of the European Union, the European Parliament, and all EU member states to ensure that the forthcoming Artificial Intelligence Act puts people’s fundamental rights first. Read the statement.

RDR MEDIA HITS

VICE News: As U.S. lawmakers craft regulations intended to curb misinformation and harmful speech on Meta’s platforms, they need to be aware that technology alone cannot solve these problems. RDR’s Nathalie Maréchal told VICE News: “These proposals presuppose that some kind of technology exists that’s able to quickly differentiate, with great confidence, between an innocent mistake and willfully malicious disinformation. […] That doesn’t exist.” Read via VICE News.

Broadband Breakfast: RDR Director Jessica Dheere joined Kirk Nahra, partner at WilmerHale LLP, and Drew Clark, editor and publisher of Broadband Breakfast, for a discussion of how privacy regulation ushered in through rulemaking and legislation could impact the market power of Big Tech and telecommunications monopolies. Watch via Broadband Breakfast.


Traffic in downtown Shanghai. Photo by Nicholas Hartmann via Wikimedia Commons (CC BY-SA 4.0)

When the government of China put out a draft regulation on algorithms in August, it broke new ground on a global scale. The draft laid out rules and standards for tech platforms’ recommendation algorithms like no other government has. And it surprised some onlookers, especially in the West, by introducing a handful of reasonable protections for users’ rights and interests.

The draft requires companies to be more transparent about their algorithmic systems and to allow users to opt out of such systems. It addresses tech platform addiction and it seeks some protections for people working in the platform-based gig economy (such as delivery workers). It also compels tech platforms to enforce “mainstream” (i.e., Chinese Communist Party) values.

People who have been watching the evolution of China’s tech policy regime in recent years saw the draft as a reflection of the major interests that the Chinese government and Communist Party have been working to balance: tech power on one hand, and public pressure on the other.

China is notorious for its digital censorship and public surveillance systems. But the state is not the only entity that poses a threat to Chinese people’s human rights. Until recently, Chinese tech companies were both enabling state efforts to control information and surveil the public, and reaping handsome profits by collecting and monetizing people’s data. Over the past decade, just like their Silicon Valley counterparts, China’s tech giants have abused user data, ignored market regulation, and deployed exploitative recommendation systems. And people have noticed. Public frustration about these practices and their effects on society reached a fever pitch in 2020 when China saw a spike in fatal traffic accidents resulting from food delivery workers trying desperately to keep up with the algorithmically-generated delivery times issued by their tech platform employers.

Public harms like these don’t just reflect poorly on big tech companies. They lay bare the lack of control that the government has over such corporations. And they pose a threat to the predominant position of the Chinese Communist Party (CCP) in Chinese society.

In order to assert authority over these companies, and to maintain or even improve their image as entities that serve and protect the public, the CCP (which makes key decisions about China’s policy environment) and the government (which implements those decisions) have pushed through a raft of tech-focused regulations in recent years—the Cybersecurity Law, the Personal Information Protection Law, and the Data Security Law—that seek to rein in companies’ data collection and monetization powers and, in some cases, to actually improve protections for the public.

These laws complicate narratives among media and policymakers in the West, who often portray China’s tech companies either as agents spreading Communist ideology and spying globally at the behest of Beijing, or as beacons of capitalism victimized by the Party’s relentless crackdowns that aim to show “who is the real boss.” There is some truth in each of these portrayals, but both fail to acknowledge the importance and rights of Chinese people. These lines of thinking also fail to account for the populist stance of the state.

Caught between the massive powers of the government on one hand, and tech companies on the other, Chinese users and their interests often get squeezed into a position where they have little sway. However, the three groups—the party, the public, and the tech powers—are intertwined and do interact with each other in a dynamic (if sometimes shifting) equilibrium.

This essay explores some critical questions about this dynamic: What is the real reason for the Chinese government’s regulatory crackdowns on tech companies? To what extent is the state trying to placate public complaints about tech giants? And most importantly: How do these things affect millions of users’ interests and rights?

Are these new laws really benefiting users?

Western media often focus on how China’s changing regulatory environment affects the operations and business models of Chinese tech companies, but leave users’ rights out of the picture. At Ranking Digital Rights, we put users’ rights at the center of our research. Over time, our evaluations of three of China’s leading tech giants—Alibaba, Baidu, and Tencent—have shown how China’s regulatory environment has brought some benefits for people’s rights to privacy and security, as well as control over their information, albeit only in areas unrelated to Chinese government surveillance.

China’s regulation of user data collection has undergone a sea change since the adoption of the 2017 Cybersecurity Law, which focused on security and cybercrime protections and established principles of “legality, propriety, and necessity” in user information collection. It was followed by the Data Security Law, which took effect in September 2021 to protect critical information infrastructure, and then by the Personal Information Protection Law (PIPL), which came into force on November 1. A sweeping data privacy law, PIPL defines personal and sensitive information, compels data processors to obtain users’ consent before collecting their data, and requires that companies allow users to opt out of targeted ads. It also bans automated decision-making that results in price discrimination.

These laws do appear to have brought increased protections for users wanting more control over how tech platforms use and profit from their data. When we reviewed Chinese companies’ policies alongside those of 11 other globally dominant digital platforms, Baidu and Tencent were more transparent about how they collect user information than all the other platforms we rank, including Google, Apple, and Microsoft. Both Baidu and Tencent made explicit commitments to purpose limitation, vowing to collect only the data needed to perform a given service. Alibaba fell behind the major Korean companies Kakao and Samsung, but still outranked all the major U.S. platforms. In our July 2021 evaluation of ByteDance (parent company of TikTok), we found that Douyin (TikTok’s Chinese counterpart) far outpaced TikTok on these metrics by committing to collect only the information necessary to provide the service.

Data from Indicator P3a from the 2020 RDR Index

Baidu and Tencent have also improved their privacy policy portals, making it easier for users to access privacy policies for their various products in one place.

We also found that all three companies provided much more information about their contingency plans to handle data breaches than they had in the past, and more than other companies across the board. This change was likely inspired by the Cybersecurity Law, which requires companies to plan for potential data breaches.

Smaller improvements have emerged as well. On Alibaba’s Taobao, users can opt out of recommendations with a single click. The same is true for targeted ads. These and other updates give the impression that the platforms want to protect the rights of users and stand with the government at the same time.

The drawbacks

It may seem like a happy ending to the story. China’s regulatory environment is clearly more privacy-protective than it was in the past, even as state surveillance practices continue unabated. But even though China’s tech companies have made the right changes to their policies, there’s strong evidence that many of them are not following their own rules.

Tencent and ByteDance have been plagued by scandals and denounced by Beijing for violating the “necessity” rule laid out in the Cybersecurity Law. In May 2021, the Cyberspace Administration of China (CAC), the country’s top internet regulator, publicly identified 105 popular apps that had illicitly collected user information and failed to provide options for users to delete or correct personal information. These apps included Baidu Browser and Baidu App, a “super app” interface for finding news, pictures, videos, and other content on mobile. Soon thereafter, Tencent’s mobile phone security app, which is meant to protect the privacy and security of users’ phones, was disciplined by the CAC for collecting “personal information irrelevant to the service it provides,” despite the promises in its policies. Douyin was caught “collecting personal information irrelevant to its service,” despite a privacy policy stating that it collects only the user information “necessary” to realize its functions and services.

In June 2021, digital news aggregator apps, including Today’s Headline (operated by ByteDance), Tencent News, and Sina News, were publicly rebuked by the CAC for collecting user information irrelevant to the service, collecting user information without user permission, or both.

In August, the Ministry of Industry and Information Technology (MIIT) publicly declared that Tencent’s WeChat (China’s most popular app) had used “contact list and geolocation information illegally.”

Anecdotally, Chinese users have voiced concerns that mobile apps are eavesdropping on their daily conversations, sometimes even when the microphone function is turned off. These accusations have implicated apps ranging from food delivery platforms, to Tencent’s WeChat, to Alibaba’s Taobao. Though it’s hard to find solid evidence, technical tests show that such snooping practices are feasible. Some users shared their experiences under the relevant topics on Zhihu, a Quora-like platform in China. Ironically, that platform too was accused of eavesdropping on users’ private conversations.

The latest public condemnation from the Chinese government came in November. MIIT ordered 38 apps, including two run by Tencent (Tencent News and QQ Music), to stop “collecting user information excessively.” Soon after, the ministry ordered Tencent to submit any new updates of its apps for technical testing and approval to ensure they meet national privacy standards. MIIT publicly accused the company’s apps of illegally collecting user information four times in 2021.

Although the Personal Information Protection Law requires tech companies to allow users to opt out of targeted advertising, the companies have turned this into a battle of wits. Baidu technically allows users to opt out, but the company’s privacy policy includes no information on where or how to actually do so. While PIPL was still pending, both Alibaba and Tencent maintained options for users to turn off ad targeting (which is on by default), but made the selection time-limited, so that users reverted to the default after six months. Tencent did not cancel the time limit until October 29, when the company was sued in a court in Shenzhen City (where Tencent is headquartered) for infringing user rights; the lawsuit included accusations regarding the time limit on ad-targeting opt-outs. Taobao hurriedly updated its privacy policy and its ad settings on November 1, the day PIPL took effect.

The draft regulation on algorithms and the voices of Chinese users

Because China’s regulation on algorithms is still at the drafting stage, we don’t yet know what will appear in its final text. But the draft has one very specific provision that appears to be a direct response to public concern: it requires labor platforms (such as food delivery services) to improve their job distribution, payment, reward, and punishment systems to protect the rights of contract laborers.

In 2020, Chinese media outlet Renwu reported on how the algorithmic systems powering China’s largest food delivery platforms, including Ele.me (owned by Alibaba) and Meituan (backed by Tencent), were exploiting delivery workers and all but forcing them to violate traffic laws. To keep up with the apps’ algorithmically optimized delivery times, workers were exceeding speed limits, running stop lights, and endangering people’s lives. In August 2020 alone, the traffic police of Shenzhen City recorded 12,000 traffic violations related to delivery workers riding mopeds or converted bicycles. Shanghai City data showed that traffic accidents involving delivery workers caused five deaths and 324 injuries in the first half of 2019. The Renwu story (available here in English) immediately resonated with people’s daily experiences on the street, eliciting tens of thousands of comments.

The public response could not be ignored. Although civilians are rarely able to influence or shape legislation in China, public safety has become an area in which they do have some sway. The Communist Party, though powerful, needs to respond to public complaints, and it has tied this responsiveness to its efforts to regulate tech companies. An important part of the Party’s legitimacy comes from the notion that it is “serving the people.”

Chinese President Xi Jinping has emphasized this point in recent statements for state media: “The development of the internet and information industry must implement the people-centered idea of development and take the improvement of people’s well-being as the starting point and foothold of informatization, to enable people to acquire more sense of contentment, happiness, and safety.”

Although the draft regulation on algorithms covers a much broader range of issues than worker rights and safety alone, it suggests that public pressure can play a role in policymaking in China when certain conditions align.

The future of Chinese users’ rights

In another kind of society, direct pressure and input from civil society organizations and academic experts could help keep pressure on tech companies, hold them accountable to the public, and create an environment where both government and corporate actors would better protect users’ rights. But in China, companies are primarily accountable to Beijing, not to users. It is only in instances where public concern aligns with state interests—most commonly, when the state can appear as “protector” of the people—that public pressure seems to come into play.

Even with new regulations, we can expect China’s tech giants to remain very profitable. The Chinese government’s various new and forthcoming tech-focused laws are intended to curb, but not drastically reduce, corporate power. They constitute a strategic and occasional application of pressure to assert state and Party power, and bring certain benefits to the government. This fits with the government’s long-standing mission to prioritize “healthy and orderly development,” a phrase that appears in countless industry guidelines and policies.

Will Beijing’s campaign to rein in China’s big tech companies persist? Enforcement campaigns are neither easy nor cheap. At some stage, as other pressing issues arise, we can expect this agenda item to slide down the Party’s priority list, at which point tech companies may be even less inclined to honor their promises.