“Social Decay” artwork by Andrei Lacatusu, licensed for reuse (CC BY-NC-ND 2.0)

This is the RADAR, Ranking Digital Rights’ newsletter. This special edition was sent on September 23, 2021. Subscribe here to get The RADAR by email.

A bombshell series published last week by the Wall Street Journal shows how Facebook’s insatiable thirst for user data and its consequent obsession with keeping users engaged on the platform take priority over the public interest again and again, even when people’s lives and fundamental rights are at imminent risk. It also provides new evidence that Facebook does not follow its own rules when it comes to moderating online content, especially outside the U.S.

In a September 18 blog post, Facebook’s VP of Global Affairs Nick Clegg wrote that the stories contained “deliberate mischaracterizations” of the company’s work and motives, but he pointed to no specific details that the stories got wrong. The evidence — internal documents, emails, and dozens of interviews with former staff — is difficult to refute. And it is not especially surprising. It builds on a pattern that journalists, researchers, and civil society advocates have been documenting for years.

One story in the series offers a litany of instances in which Facebook employees tried to alert senior managers to brutal abuses of the platform in developing countries, only to have their concerns pushed aside. Former employees told WSJ of company decisions to “allow users to post videos of murders, incitements to violence, government threats against pro-democracy campaigners and advertisements for human trafficking,” despite all these things going against Facebook’s Community Standards. A former executive said that the company characterizes these issues as “simply the cost of doing business.”

Consistent with years of reports and grievances from global civil society, and more recent accounts from whistleblowers, these stories shed new light on Facebook’s long-standing neglect of human rights harms that stem from its platform, but occur far away from Menlo Park. One internal document showed that of all the time staff spent combating disinformation, only 13 percent was devoted to campaigns outside the U.S.

The company likely prioritizes content moderation in the U.S. because it faces the greatest regulatory and reputational risks on its home turf. But the U.S. is no longer the epicenter of its user base. Thanks to Facebook’s ruthless pursuit of growth in the global south, most of its users today do not live in stable democracies or enjoy equal protection before the law. As Facebook endlessly connects users with friends, groups, products, and political ads, it creates a virtual minefield — with real life-or-death consequences — for far too many people worldwide.

Think back to late August, when social media lit up with reports of Facebook and Instagram users in Afghanistan frantically working to erase their histories and contact lists. The company offered some “emergency response” measures, allowing Afghan users to limit who could see their feeds and contacts. But on a platform that constantly asks you to share information about yourself, your friends, your activities, and your whereabouts, this is a band-aid solution at best.

In situations of violent conflict, contestation of political power, or weak rule of law, the protection of a person’s privacy can mean the protection of their safety, their security, their right to life. Matt Bailey underlined this in a piece for PEN America:

…in a cataclysm like the one the Afghan people are experiencing, this model of continuously accruing data—of permanent identity, publicity, and community—poses a special danger. When disaster strikes, a person can immediately change how they dress, worship, or travel but can’t immediately hide the evidence of what they’ve done in the past. The assumptions that are built into these platforms do not account for the tactical need of Afghan people today to appear to be someone different from who they were two weeks ago.

But this is not just a problem for people in Afghanistan, or Myanmar, or India, or Palestine, where some of the company’s more egregious acts of neglect have played out and received at least some attention in the West. The problem is systemic.

Facebook employees often cite “scale” as a reason why the company will never be able to consider every human rights violation or scrub all harmful content from its platform. But how exactly did Facebook come to operate at such an awesome scale? Perhaps more than any other social media platform, Facebook has cannibalized competitors and collected and monetized user data at an astonishing rate, putting these pursuits ahead of all other interests, including the human rights of its 3 billion users.

Our work at Ranking Digital Rights rests on the principle that regardless of scale, companies have a responsibility to respect human rights, and that they must carry this out (as written in the UN Guiding Principles on Business and Human Rights) “to avoid infringing on the rights of others and address adverse impacts with which they are involved.” We push companies to commit to respecting human rights across their business practices, and then push them to implement these commitments at every level of their organization. Facebook made such a commitment earlier this year. But to what end?

As evidence of its disregard for people’s rights continues piling up, Facebook’s promises ring hollow, as do its lackluster efforts to improve transparency reporting and carry out (and actually act upon) human rights due diligence. Today, leaks from whistleblowers and former employees seem like the only reliable source of information about how this company actually operates.

For us, this raises the question: How valuable is it to assess Facebook’s policies alone? In this case, and with some of the other tech giants we rank, would it be more effective to expand our focus to include leaks and other hard evidence of corporate practice?

We don’t have all the answers yet, but as revelations like these become more and more frequent, we will continue asking these questions of ourselves, our peers, and the companies we rank. If tech companies do not want to tell the world how they work, how they profit, and how they factor the public interest into their bottom line, we will need to find new ways to force their hand.

Facebook is an ad tech company. That’s how we should regulate it.

RDR Senior Policy and Partnerships Manager Nathalie Maréchal is calling on platform accountability advocates to start following the money when it comes to regulating Facebook. In a recent piece for Tech Policy Press, Maréchal wrote:

[We] need to reframe the ‘social media governance’ conversation as one about regulating ad tech. Facebook, Twitter, YouTube, TikTok and the rest exist for one purpose: to generate ad revenue. Everything else is a means for producing ad inventory.

Maréchal also spoke with The Markup’s Aaron Sankin about Facebook’s claim that it supports internet regulations that would mandate specific approaches to content moderation. We think content moderation is important and raises really difficult questions, but we can’t let this distract us from ads, which are the main driver of Facebook’s profits.

“…[As] long as everyone is focused on user content and all of its discontents, we are not talking about advertising. We are not talking about the money,” Maréchal said. Read via The Markup

Telenor mobile shop in Yangon, Myanmar. Photo by Remko Tanis via Flickr (CC BY-NC-ND 2.0)

Another company in crisis: Telenor’s fraught departure from Myanmar

In July, Norwegian telecommunications firm Telenor announced plans to sell its subsidiary in Myanmar to M1 Group, a Lebanese conglomerate with a record of corrupt practices and human rights abuses. Since then, it has come to light that the Myanmar military, which took control of the country in a February 1 coup, ordered telecommunications providers to install surveillance technology on their networks to help boost the military’s snooping capacity. The sale has yet to be approved by the military regime, and industry sources cited by Nikkei Asia suspect the deal may be rejected.

Human rights advocates in Myanmar and around the world have been pushing Telenor to take responsibility for its human rights obligations and stand up against military demands. In August, RDR joined a coalition letter to Telenor Group board chair Gunn Wærsted calling for the company to either cancel or pause the sale in order to carry out robust due diligence measures, including consultation with local civil society, and publication of human rights impact assessments on the effects of the sale.

What’s new at RDR?

Changes are coming to the RDR Index! This spring, we looked back on five years of the RDR Corporate Accountability Index and made a major decision: In 2022, we will split our flagship research product into two separate rankings. Next April, we will release a new ranking of digital platforms. In October 2022, we expect to publish a new ranking of telecommunications companies. This approach will allow us to dedicate more time to studying the contexts in which these companies operate and to streamline our engagement efforts around all of the companies we rank.

The 2020 RDR Index, now in translation: The executive summary of the 2020 RDR Index is now available in six major languages: Arabic, Chinese, French, Korean, Russian, and Spanish! As in years past, we partnered with Global Voices Translation Services to translate these key components of our research. Check them out.

#KeepItOn: Campaign letters to prevent network shutdowns in Russia, Zambia, Ethiopia

As members of Access Now’s #KeepItOn campaign coalition to prevent network shutdowns worldwide, we supported advocacy letters in recent months calling on authorities in Russia, Zambia, and Ethiopia to keep networks on.

EVENTS
Tech Policy Press symposium | Reconciling Social Media & Democracy
October 7 at 1:00 PM ET | Register here
At this convening to discuss various proposals to regulate the social media ecosystem, Nathalie Maréchal will join panelists including Francis Fukuyama, Cory Doctorow, and Daphne Keller to promote an approach to corporate governance that can advance human rights.

Translation by Icon Lauk via The Noun Project, CC BY.

Ranking Digital Rights has partnered with Global Voices Translation Services to translate key components of the 2020 RDR Corporate Accountability Index into six major languages: Arabic, Chinese, French, Korean, Russian, and Spanish!
Visit our translations page.

The RDR Index is global in scope. We evaluate 26 companies, whose products and services are used by over four billion people worldwide, in all kinds of cultures and contexts. The languages of our translations reflect this diversity, covering the most commonly spoken languages in the countries where the companies we rank are located.
We believe in strengthening and upholding the role of local civil society organizations, researchers, and advocates. Making our resources available in multiple languages is a key part of how we think about the reach and impact of our corporate accountability research and engagement. In September 2020, we published translations of our revised methodology, so that anyone around the world can use our standards to hold companies accountable and build unique advocacy campaigns.

I’m RDR’s global partnerships manager. Since joining RDR in February this year, I’ve been working to build partnerships and engage civil society groups all over the world. We want to encourage further scrutiny of technology companies across different countries and regions, particularly those that our in-house research does not cover. To achieve this, we’re nurturing relationships to collaborate with our allies and support their work, while continuing to make our materials accessible to a broad range of stakeholders. This also includes developing resources and workflows that offer direct guidance on adapting RDR’s methodology to the specific goals and local contexts of our partners.
With these translations, we hope to support broader advocacy actions that can leverage our analysis and data and bring closer attention to our rigorous human rights standards.

It takes a village: We thank Global Voices for their work on the translations, as well as our regional partners for their help in reviewing and promoting these materials!

Get in touch: If you’re a researcher or advocate interested in learning more about our methodology, our team would love to talk to you! Write to us at info@rankingdigitalrights.org.

Image by Jayanti Devi via Pixahive. CC0

This is the RADAR, Ranking Digital Rights’ newsletter. This special edition was sent on July 14, 2021. Subscribe here to get The RADAR by email.

This spring, we’ve had our eyes on TikTok. Today, we’re releasing our first-ever evaluation of TikTok, Douyin (its counterpart in China), and their parent company ByteDance.

Despite politicians’ fears that it might put people’s data at risk of surveillance by Chinese authorities, TikTok’s popularity in the U.S. has skyrocketed over the past two years, bringing users a deluge of pop dance videos, comic impersonations, silly stunts involving pets and snack foods, and plenty of sponsored content.

But critics—many of whom are TikTok creators themselves—are asking pointed questions about the technology and the policies driving all this, and much more, on the app. In June, Black TikTok creators mobilized a strike, highlighting the app’s well-established tendency to promote videos of white users imitating dances choreographed by Black creators. Leaders of the strike emphasized that white influencers—and the company itself—were profiting off these videos, often without compensating or even crediting the dances’ originators.

Just last week, Black TikTok creator Ziggi Tyler posted a series of videos in which he showed how the platform’s creator marketplace rejected text that contained phrases like “black lives matter,” “black success,” and “black voices” while allowing phrases like “white voices” and “white supremacy.”

@ZiggiTyler shared a screencast of his attempts to type his bio on TikTok’s Creator Marketplace. In the frame on the right, the phrase “black lives matter” triggered the warning above: “To continue, remove any inappropriate content.”

When a reporter for Vox’s Recode blog asked TikTok about the problem, a company spokesperson cited a technical error stemming from the platform’s hate speech detection systems, and stated that “Black Lives Matter does not violate our policies.” Similar to other Big Tech companies, TikTok appears to be relying on its technical systems to make big decisions about things as complex as hate speech. But Tyler’s experience shows that the machines can’t handle it.

These creators are collectively spotlighting the fact that users (and the public in general) have little ability to hold TikTok accountable for everything from curatorial choices to algorithmic bias, thanks to the company’s lack of transparency about its algorithms and content policies and processes. This is a problem, especially for an app like TikTok, where creators form the backbone of its commercial success. And as the company’s popularity and user base continue to grow, so do its effects on people’s rights and the public interest.

Intrigued by these and other questions about the policies and practices of both TikTok and its China-based counterpart, Douyin, the team at Ranking Digital Rights decided to assess both apps, alongside ByteDance, their Beijing-based parent company. We were also naturally curious about ByteDance, which is the first Chinese social media company to achieve mass popularity outside the East Asian market, and to compete with major U.S. platforms like Instagram.

For this study, we used a subset of our human rights-based standards to assess how ByteDance’s policies set the tone for both platforms, to better understand how the internet governance practices of China-based companies change or persist when companies are operating outside of Chinese territory. We also sought to find out how TikTok’s policies for U.S. users compare with the policies of its dominant U.S.-based competitors.

Our results offer a complex picture. TikTok gave users some information about how it treats their speech and data—but not nearly enough. When we compared TikTok with Douyin, its Chinese counterpart, we saw critical differences in their policies, reflecting the legal and regulatory frameworks where the two services operate. But we also saw evidence of TikTok leveraging an aggressive combination of human and algorithmic content curation and moderation techniques prioritizing content that is entertaining and apolitical, similar to Douyin and other Chinese social media platforms.

Finally, on the hot topic of U.S. users’ data security, TikTok’s policies offer the same kinds of protections for user data as its U.S. competitors, and technical research by the Citizen Lab suggests that the company takes technical precautions similar to those of U.S. platforms in its efforts to protect user data. TikTok also says that U.S. user data is stored in the U.S. (with a backup in Singapore) and is at no risk of acquisition by the Chinese government. But we have little capacity to independently test or assess the company’s claims. So we have to take TikTok’s word for it.

We’re eager to talk about our findings and insights with our RADAR readers. Read the full study, download our dataset, and tell us what you think!

 

Image by Jayanti Devi via Pixahive. CC0

Authors: Zak Rogoff, Veszna Wessenauer, Jie Zhang

From time to time, Ranking Digital Rights assesses companies that are having a growing impact on the public interest and the protection of people’s rights, but are not covered in the RDR Corporate Accountability Index. For these studies, we apply a selection of our rigorous human rights-based standards to evaluate their policies and practices, and the potential risks they pose to human rights and the global information ecosystem. As with the RDR Index, our aim with these studies is to establish an evidence base that policy makers, investors, and civil society can use to hold these companies accountable and against which we can monitor their progress over time.

This spring, Ranking Digital Rights conducted a study on the privately owned, Beijing-based company ByteDance and its twin video-sharing services: TikTok and its China-based counterpart, Douyin. We were naturally intrigued by ByteDance, which is the first Chinese social media company to achieve mass popularity outside the East Asian market, and to compete with leading U.S. platforms such as Instagram.

We set three core objectives: First, we wanted to see how ByteDance’s governance choices regarding content, data security, and government censorship and data demands affect users’ rights. Second, we wanted to compare TikTok’s and Douyin’s policies, to increase our understanding of how Chinese internet governance practices change or persist outside of Chinese territory. Finally, we wanted to find out how TikTok’s policies for U.S. users compare with the policies of its dominant U.S.-based competitors, mainly Instagram and YouTube, particularly in light of geopolitical and free speech controversies that have emerged with the rise of TikTok in the U.S.

Key questions and answers

Are TikTok’s U.S. policies substantively different from those of similar U.S.-based platforms?

No. While TikTok’s policies and practices stand out in a few small ways, the platform is largely aligned with its major competitors in the U.S. (such as Instagram and YouTube) when it comes to policies affecting users’ freedom of expression and privacy.

Are TikTok users subject to greater human rights risks, given that the platform’s parent company, ByteDance, is headquartered in China?

It’s hard to say. TikTok’s policies offer the same kinds of protections for user data as its U.S. competitors. Technical research by the Citizen Lab also suggests that the company takes technical precautions similar to those of U.S. platforms in its efforts to protect user data. TikTok says that U.S. user data is stored in the U.S. (with a backup in Singapore) and is at no risk of acquisition by the Chinese government. But we have to take TikTok’s word for it.

What does the policy contrast between TikTok (in the U.S.) and Douyin tell us about how Chinese companies operate both at home and in foreign jurisdictions?

The two platforms’ policy environments reflect critical differences in the legal and regulatory frameworks where they operate. With that being said, we observed that TikTok leverages an aggressive combination of human and algorithmic content curation and moderation techniques that appear to prioritize content that is entertaining and apolitical, similar to Douyin.

Why did we decide to study ByteDance?

Founded in 2012, ByteDance is headquartered in Beijing and legally domiciled in the Cayman Islands. It has various video sharing, social media, news, and web search products that are popular in China. Outside China, it is mostly known for its video sharing service, TikTok. For this study, we looked at two ByteDance services: TikTok and its counterpart in China, Douyin. TikTok and Douyin are both short-form video sharing platforms that share most of the same key features. They target the U.S./international and Chinese markets, respectively.

Each service has a broad user base in its target market. TikTok said in August 2020 that it had about 100 million monthly and 50 million daily active U.S. users, up nearly 800% from January 2018. According to App Annie, a mobile data and analytics company, TikTok was the most downloaded app from iOS and Android app stores in 2020, ahead of Facebook, Instagram, and YouTube. Douyin hit 600 million daily active users as of August 2020, according to ByteDance.

Both Douyin and TikTok leverage the same surveillance capitalism principles of behavioral data collection and monetization that have exploded profits for Big Tech companies in the U.S. Both apps track everything from users’ locations to likes and follows to the amount of time they spend looking at specific videos in order to serve them “personalized” organic and sponsored content. Both apps also leverage the popularity of certain users (known as “creators”) to broker sponsorship deals with third-party companies that pay creators to promote their products or services to users of the app.

Our decision to evaluate ByteDance, a privately held company, marks a departure from our typical standard, which is to evaluate only publicly traded companies. As a privately held company, ByteDance has no mandate to disclose information about its corporate governance to the public, as required by major stock exchanges and regulators in most markets. This gives us fewer avenues for putting pressure on the company. Nevertheless, we believe that the large-scale human rights and public interest implications of ByteDance’s services and the exceptionally high degree of public attention on the company, due to its rapid growth and its symbolic value in U.S.-China relations, merit our scrutiny.

TikTok has been in the political crossfire amid rising tensions between the U.S. and China, with policymakers worrying that Chinese authorities might have easy access to the data of TikTok users in the U.S. TikTok has publicly affirmed that U.S. user data is stored in the U.S. (with a backup in Singapore) and is at no risk of acquisition by the Chinese government, yet concerns about the data security of U.S. users have persisted among policymakers. Former U.S. president Donald Trump attempted to ban the app with a 2020 executive order that was blocked by the courts and then officially revoked by President Joe Biden in June 2021. The Biden Administration put forth a new order that will set in motion “rigorous, evidence-based analysis” of certain software products owned by foreign adversaries, including China, “that may pose an unacceptable risk to U.S. national security.”

Although discussions about TikTok have been dominated by security-focused policy conversations and geopolitical concerns, particularly in the U.S. and India (where the app was banned in 2020), the service has unique qualities affecting freedom of expression and information in ways that differ from what we see on other popular platforms in the U.S. ByteDance is the first Chinese company to offer a social media service that actively competes with the biggest U.S. platforms, like YouTube and Instagram. The fact that TikTok is owned by a Chinese company is important not just from a privacy standpoint, but from a content governance perspective as well.

Many experts argue that algorithmic recommendation is the main driver of the popularity of both TikTok and Douyin. For this study, we wanted to further examine this theory and assess the companies’ stance on freedom of expression, alongside privacy and security. We sought to understand the implications of the Chinese ownership of two twin services with very different target markets, demonstrate the impact of different legal and political environments on the policies and practices of these twin services, and see how they affect users’ human rights.

How did we do the research?

For this study we looked at the policies of Douyin in China, TikTok in the U.S., and parent company ByteDance. Although TikTok operates internationally and has different policies for various geographic areas, we elected to focus on its policies for the U.S. for two reasons. First, the U.S. is TikTok’s flagship overseas market, with 100 million active monthly users, and a growing group of stakeholders are investigating the platform’s policies and practices. Second, we wanted to be able to compare our findings for TikTok with our findings for major U.S.-based social media services like Instagram and YouTube from the 2020 RDR Index, where we evaluated platforms’ policies in their home markets only.

We selected 39 of our indicators (out of the full list of 58) that would best measure the most prominent human rights risks for users of either service. Since we picked two services that pose a number of human rights risks stemming from their business models and heavy use of algorithms, we included our indicators on targeted advertising, algorithmic systems, and content governance. We also sought an empirical basis for the national security and privacy concerns that governments and the media have come to associate with TikTok. Therefore, we included our indicators assessing transparency around privacy, information security, and government demands to access user information.

We reviewed the public documents disclosed by the company, including policies provided to users and business partners, company blog posts, and reports, against the criteria of each element contained in the 39 selected indicators. Each indicator comprises a set of questions (what we call “elements”) about the company’s policy or practice in a specific area. For each element, we give each service one of three possible scores: “full credit” (100), “partial credit” (50), or 0 for “no credit” or “no disclosure found.” Each service receives a per-indicator score reflecting the mean value of all elements in the indicator. Learn more about our methodology.
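To make this arithmetic concrete, here is a minimal sketch in Python of the element-to-indicator scoring described above. The element values in the example are hypothetical, chosen purely for illustration; they do not come from our dataset.

```python
# Minimal sketch of the element-to-indicator scoring described above.
# Element scores: 100 = full credit, 50 = partial credit,
# 0 = no credit / no disclosure found.
FULL, PARTIAL, NONE = 100, 50, 0

def indicator_score(element_scores: list[int]) -> float:
    """A per-indicator score is the mean of the indicator's element scores."""
    return sum(element_scores) / len(element_scores)

# Hypothetical example: a four-element indicator where a service earns
# full credit once, partial credit twice, and no credit once.
print(indicator_score([FULL, PARTIAL, PARTIAL, NONE]))  # 50.0
```

The same mean-based logic extends upward: per-indicator scores are in turn averaged into the issue-area aggregates shown in our charts (see Appendix B).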

Alongside our indicator-based evaluation of ByteDance and its video-sharing services, we reviewed independent research of TikTok by the Citizen Lab and the Mozilla Foundation. We also reviewed independent media coverage and commentary about the company and a series of leaked internal documents from TikTok that sparked investigations by The Guardian, The Intercept, and German digital rights blog Netzpolitik.

Our research findings

We rank companies on their digital rights governance, and on their policies and practices affecting freedom of expression and privacy. Our findings are organized by these categories below. In certain cases, we compare our findings for TikTok and Douyin with our data for Instagram, from the 2020 RDR Index. Our primary objective here is to give readers an idea of how TikTok compares to one of its most prominent U.S.-based competitors.

Digital rights governance

In contrast to other large multinational tech companies, ByteDance offers very little public documentation of governance policies or practices that affect people’s rights to free expression and privacy. TikTok has distinguished itself from its parent company in some policy areas that directly affect users’ rights, by doing things like publishing transparency reports, but overall, the platform does not make an explicit commitment to human rights, or conduct human rights due diligence, in accordance with our standards.

Values represent combined average indicator scores for each issue area. See appendix for more.

Neither ByteDance, nor TikTok, nor Douyin pledged to protect privacy or freedom of expression as defined by human rights law (G1), nor did any of the three conduct human rights impact assessments, a key tool for companies seeking to prevent their products and services from causing human rights harms (G4). This is typical of Chinese social platforms ranked by RDR, but it puts TikTok behind major U.S. peers such as Instagram (owned by Facebook), which conducts human rights impact assessments in some key areas, including its processes for policy enforcement and its approach to government regulations and policies that affect freedom of expression and information and privacy.

Content governance

User content rules/governance and enforcement
Indicators G6b, F1a, F3a, F4a, F4b

Both services provided public content rules that were easy to find and understand (F1a), though Douyin was slightly more detailed in explaining the circumstances under which it may restrict content or user accounts (F3a) and appeared to offer a more comprehensive system for users to appeal moderation decisions (G6b). Leaks have revealed that TikTok also maintains more detailed internal rules that are not visible to the public. TikTok reported more data than Douyin about the nature and volume of its enforcement actions (F4a, F4b), roughly on par with Instagram.

Values represent combined average indicator scores for each issue area. See appendix for more.

Ad content and targeting rules and enforcement
Indicators F1b, F1c, F3b, F3c, F4c

Advertising is the primary source of revenue for both ByteDance services, similar to other major platforms in China and the U.S. Whereas Douyin’s advertising policies were jumbled and hard to find, TikTok was more transparent about its ad policies and enforcement actions, narrowly surpassing Instagram’s score on this metric in the 2020 RDR Index. A Mozilla study found that the company did not fully enforce its advertising policies when it came to sponsored content (i.e., content that third parties paid TikTok influencers to share), a misstep for which Instagram has also been criticized. Douyin failed to provide any data about the volume and nature of its enforcement of ad content policies (F4c).

Algorithms, bots
Indicators F1d, F12, F13

Like most companies, neither service provided comprehensive rules governing their use of algorithmic systems (F1d). However, both services offered disclosures describing their algorithmic curation processes (F12), and TikTok published a dedicated document for this purpose, which scored better than any other service ranked by RDR in 2020, including YouTube and Instagram. The document explains design considerations and some of the elements of user behavior that influence the algorithm, but it is far from comprehensive. Though ByteDance’s public materials do not mention this, leaked internal documents have shown that the algorithm also takes input from TikTok staff, who assign content to different levels of algorithmic amplification. Although we were not able to find similar information about Douyin’s practices, the general similarity of the services suggests this takes place on that platform as well, and Chinese blogs have discussed the existence of such a process.

Government demands to censor content
Indicators F5a, F6, F8

Along with government demands to access user information, government censorship demands are where we see the starkest difference between ByteDance’s two services. Unsurprisingly, this reflects China’s unique political and legal environment. Douyin discloses almost no information about its processes or data related to such demands, though a former ByteDance employee claimed the company receives up to 100 such demands per day. While it is not as clear and thorough in its disclosures as competitors such as Instagram, TikTok does regularly report on such demands, and offers this data broken out by country of origin.

Values represent combined average indicator scores for each issue area. See appendix for more.

 

Privacy and security

Government demands to access user information
Indicators P10a, P11a, P12

Only TikTok offered meaningful disclosure in this area. Its biannual transparency reports break out government demands for user data by country, though it is worth noting that these reports do not mention any data requests from the government of China. Douyin offers no such information. Although there are no laws or regulations in China prohibiting Chinese companies from releasing data about government demands to access user information, the political and legal environment discourages companies from doing so.

User information
Indicators P1a, P1b, P3-P9

Our data highlights the contrast between legal regimes for user data protection in China, which covers these areas with its 2017 Cybersecurity Law and a pending data protection law, and the U.S., which has no comprehensive data protection law. Douyin outperformed TikTok on our indicators for its clearer and more comprehensive disclosures of what information it collects (P3a), infers (P3b), and retains (P6), as well as its purposes for doing so (P5). Unlike TikTok, Douyin pledged to collect only data that is reasonably necessary for its functionality, as required by Chinese law, but it has been reprimanded for poor compliance with these requirements. A technical analysis by the Citizen Lab found no discrepancies between what the two apps’ privacy policies say and what information their systems actually collect. Despite its overall advantage in this area, Douyin provided fewer options for users to access (P8) or control the use of (P7) their information than TikTok.

Values represent combined average indicator scores for each issue area. See appendix for more.

Security
Indicators P13-P17

While TikTok has no published policy regarding data breaches, Douyin received a perfect score for pledging to notify users and help them navigate the consequences of such information leaks (P15), in accordance with China’s cybersecurity law. Nevertheless, TikTok outperformed Douyin on security-related indicators, largely because it offered multi-factor authentication to protect users’ accounts (P17) and made it much easier for external researchers to submit reports of security vulnerabilities (P14). Douyin has a bug-bounty program, but does not provide multi-factor authentication.

 

Download the complete data set, or get in touch!

We invite you to download our full dataset [.XLSX / .CSV] and find your own insights! This includes extensive excerpts from the two services’ public disclosures, analysis of their alignment with RDR’s rigorous human rights indicators, and a complete list of our sources. Contact us at info [at] rankingdigitalrights.org with questions about the analysis or data collection.

 

APPENDICES

Appendix A: Our indicators

For this study, we selected 39 of our indicators (from the full list of 58) that would best measure the most prominent human rights risks for users of either service.

G: Digital rights governance

F: Freedom of expression

P: Privacy

 

Appendix B: Indicator Groups

Each of our charts shows aggregate scores for indicator groups listed below. Each aggregate score represents the average of scores for each indicator in the group.
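To illustrate the aggregation step (with made-up numbers, not our published scores), each chart value is a simple unweighted mean over the indicators in the group:

```python
# Sketch of group aggregation: each chart value is the unweighted average
# of the per-indicator scores in the group. All scores below are made up.
def group_aggregate(indicator_scores: dict[str, float]) -> float:
    return sum(indicator_scores.values()) / len(indicator_scores)

# e.g., the "User content rules/governance and enforcement" group
# (indicators G6b, F1a, F3a, F4a, F4b), with hypothetical scores:
scores = {"G6b": 50.0, "F1a": 75.0, "F3a": 62.5, "F4a": 40.0, "F4b": 55.0}
print(group_aggregate(scores))  # 56.5
```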

Appendix C: Sources list

In addition to conducting our own research, drawing on policies and other documents published by ByteDance, Douyin, and TikTok, we also relied on the work of other organizations that have studied and investigated TikTok and Douyin.

On July 1, 2021, RDR Senior Policy and Partnerships Manager Nathalie Maréchal testified before the United States International Trade Commission in the context of its investigation into foreign censorship policies and practices affecting U.S. companies. The investigation was initiated in response to a request from the U.S. Senate Finance Committee concerning censorship as a non-tariff barrier to trade. Below is her written testimony.

Good morning and thank you for inviting me to testify. I am Nathalie Maréchal, Senior Policy & Partnerships Manager at Ranking Digital Rights (RDR). Previously, I was a doctoral fellow at the University of Southern California, where I researched the rise of digital authoritarianism, the transnational social movement for digital rights, and the role of the U.S. Internet Freedom Agenda in advancing freedom of expression, privacy, and other human rights around the world. 

RDR is an independent research program housed at the New America think tank. RDR works to promote freedom of expression and privacy on the internet by ranking the world’s most powerful digital platforms and telecommunications companies on international human rights standards. Our Corporate Accountability Index evaluates 26 publicly traded digital platforms and telecom companies headquartered in 12 countries. Among them are the U.S. “Big Tech” giants: Apple, Facebook, Google, and Microsoft, but also some of the largest companies in China, such as Baidu and Tencent. All told, these companies hold a combined market capitalization of more than US$11 trillion. Their products and services affect a majority of the world’s 4.6 billion internet users.

At RDR, we believe that companies should build in respect for human rights throughout their value chain. They should be transparent about their commitments, policies, and practices so their users and their communities can hold them accountable when they fall short. Foreign censorship prevents them from doing this by requiring them to participate in human rights violations and limiting what they can disclose about their own operations. This is not a new problem: the first Congressional hearing on the topic took place in 2007, after Yahoo! turned over the email accounts of two democracy activists to the Chinese government. But it is a problem that grows more urgent every year, as more and more social, political and economic activity is mediated through internet companies—especially in the pandemic context—and governments develop new strategies and tactics to control the flow of information online, with grave consequences for democracy and human rights—and trade. The U.S. government and American companies must play a leading role in ensuring that all human rights, including freedom of expression and information, are respected online as well as offline.

Governments use strategies—known as information controls—that go beyond simply suppressing speech in order to control public discourse and thus manipulate domestic and foreign populations, often with the consequence or even the aim of violating human rights. Information controls comprise “techniques, practices, regulations or policies that strongly influence the availability of electronic information for social, political, ethical, or economic ends.” All of these strategies have implications for U.S. companies’ ability to enter and compete in foreign markets and constitute non-tariff barriers to trade. They make it more expensive for American companies to respect human rights, and can result in companies adopting policies and practices that directly undermine U.S. foreign policy priorities. 

Freedom of expression and information as an international human right

On June 16, the 10th anniversary of the UN Guiding Principles on Business and Human Rights (UNGPs), Secretary of State Antony Blinken renewed the United States’ commitment to advancing business and human rights under the framework set out in the UNGPs, which says: 1) States have the duty to protect human rights; 2) businesses have a responsibility to respect human rights; and 3) victims affected by business-related human rights issues should have access to remedy.

The cooperation of private companies like internet service providers (ISPs), telecom operators and over-the-top (OTT) intermediaries like social networking sites and messaging apps is almost always required for information controls to be effective. And given the leading role that American companies have played in the growth of the global internet, this means that American companies are often implicated. 

But again, American companies doing business in foreign markets have a responsibility to respect freedom of expression and information even when national governments fail to do so themselves. Of course, they also have the responsibility to do this within our borders, though I recognize that is not the focus of this hearing.

Information controls: Policies and Practices

Today I will talk about four broad information control strategies: technical barriers to access; content removals within social media platforms; measures intended to cause chilling effects or self-censorship; and online influence campaigns.

The most blatant technical barriers to access are:

  • Network shutdowns and disruptions: Governments frequently order ISPs and mobile operators to shut down network access in specific areas, often coinciding with political events like elections, protests, and armed conflict. They may also demand that companies filter the specific protocols associated with VoIP calls or even individual messaging services like WhatsApp. The companies that produce the hardware and software required for network operations are under pressure to build these capabilities into their products.

  • A more precise technical approach is to block specific web services, sites and pages: These measures prevent the population from accessing forbidden content online, essentially aiming to transpose national boundaries from the physical world into cyberspace. China’s “Great Firewall,” which prevents internet users in mainland China from accessing a broad range of foreign websites, is a classic example.

The second strategy is to restrict content within social media platforms, which can be done in a number of ways:

  • Many countries prohibit specific types of expression, thus creating legal requirements for OTT services to moderate user content according to local law. For example, Thailand prohibits insulting the king and his family; Russia forbids so-called “LGBT propaganda”; in Turkey it is a crime to “insult the nation.” Internet companies that operate in those markets are often required to proactively identify and restrict such content, either by removing it altogether or by restricting access to it within the country in question. When they do so, they are in effect acting as censors on behalf of the local government. However, companies struggle to identify and restrict all instances of potentially rule-breaking content without also censoring legal speech.
  • Authorities can issue legal requests to take down or geographically restrict specific user accounts or pieces of content. Many platforms will only consider demands sent by a court or other judicial authority within a proper legal framework, and are publicly committed to pushing back against illegal or overly broad requests.
  • Some countries, including China, hold internet intermediaries like social media platforms legally responsible for their users’ illegal speech or content. These intermediary liability regimes incentivize companies to aggressively moderate content using a combination of AI tools and human labor that often results in false positives.
  • Governments also abuse companies’ own content moderation processes. Most social media platforms’ user content rules prohibit types of expression that are legal under national law but that governments may nevertheless want to restrict, like representations of groups designated as terrorist organizations. Governments can report such content to companies through user reporting or “flagging” mechanisms in order to have the content restricted outside of any legal process.
  • Secret or informal relationships with companies are, by definition, hard to detect, but journalists have found evidence suggesting that senior social media company employees maintain relationships with high-ranking government officials or their political parties. This can lead to content moderation decisions that benefit the government or political party in question.

The third strategy is to create chilling effects or a culture of self-censorship: Academic research has demonstrated that people self-censor when they know or suspect they are under surveillance, and may face repercussions for their online expression or activity. Specific policies and practices that governments use to produce chilling effects include the intermediary liability regimes discussed above, as well as:

  • Engaging in targeted surveillance of activists and civil society groups who oppose authoritarian governments.
  • Banning end-to-end encryption used in secure messaging tools, or requiring the use of “responsible encryption,” exposes internet users to surveillance risks and repercussions for their online speech.
  • “Real name” policies and ID requirements that force users to register their SIM cards with the authorities, provide proof of identity when using an internet cafe, and link their online activities to their “real name” make anonymous speech impossible, creating “chilling effects” that inhibit the expression and even the consumption of controversial online content.
  • Data localization requirements can also create chilling effects. Since the 2013 Snowden revelations, many governments now require that data about their citizens be stored within their borders, ostensibly to protect the data from U.S. intelligence. However, in many cases the real effect of data localization is to make the data easier to access for domestic intelligence and law enforcement.

The fourth information control strategy is online influence campaigns. Governments increasingly seek to control public opinion not by preventing the production and dissemination of information they dislike, but by flooding the public sphere with false, misleading, or distracting information that starves disfavored speech of the public’s attention: this is censorship by “distributed denial of attention.” The spread of these tactics has led to the current misinformation and disinformation crisis. In response to this crisis, a wide range of actors, including governments and civil society organizations, have called on companies to adopt and enforce stricter rules against mis- and disinformation on their platforms. As with other types of potentially harmful content, company efforts to restrict influence operations can result in collateral censorship of legitimate expression that is protected under international human rights law.

Limiting companies’ ability to enforce their own content rules is the next frontier in information controls. When companies crack down on hate speech, incitement and disinformation, they sometimes limit or censor the speech of government actors or political parties. Last month, Twitter removed a tweet from the official account of Nigeria’s president that contained a veiled threat against Igbo people, who represent the third largest ethnic group in the country. The next day, Twitter was blocked nationwide and officials threatened to arrest anyone using the service via VPN. This has created serious consequences for Twitter, and has also left people in Nigeria—the most populous country in Africa, with an estimated 40 million Twitter users—unable to use the service.

In conclusion: digital authoritarians aim to structure the information environment in ways that are beneficial to their own strategic narratives, and detrimental to discourse that challenges them. By addressing the negative effects of foreign censorship on U.S. companies, we will enable those companies to do a better job of upholding their human rights obligations and setting an example for companies around the world.

Thank you again for the opportunity to testify today. I look forward to your questions.