Image by Jayanti Devi via Pixahive. CC0

Authors: Zak Rogoff, Veszna Wessenauer, Jie Zhang

From time to time, Ranking Digital Rights assesses companies that have a growing impact on the public interest and the protection of people’s rights, but are not covered in the RDR Corporate Accountability Index. For these studies, we apply a selection of our rigorous human rights-based standards to evaluate their policies and practices, and the potential risks they pose to human rights and the global information ecosystem. As with the RDR Index, our aim is to establish an evidence base that policymakers, investors, and civil society can use to hold these companies accountable and against which we can monitor their progress over time.

This spring, Ranking Digital Rights conducted a study of the privately owned, Beijing-based company ByteDance and its twin video-sharing services: TikTok and its China-based counterpart, Douyin. We were naturally intrigued by ByteDance, the first Chinese social media company to achieve mass popularity outside the East Asian market and to compete with leading U.S. platforms such as Instagram.

We set three core objectives: First, we wanted to see how ByteDance’s governance choices regarding content, data security, and government censorship and data demands affect users’ rights. Second, we wanted to compare TikTok’s and Douyin’s policies, to increase our understanding of how Chinese internet governance practices change or persist outside of Chinese territory. Finally, we wanted to find out how TikTok’s policies for U.S. users compare with the policies of its dominant U.S.-based competitors, mainly Instagram and YouTube, particularly in light of geopolitical and free speech controversies that have emerged with the rise of TikTok in the U.S.

Key questions and answers

Are TikTok’s U.S. policies substantively different from those of similar U.S.-based platforms?

No. While TikTok’s policies and practices stand out in a few small ways, the platform is largely aligned with its major competitors in the U.S. (such as Instagram and YouTube) when it comes to policies affecting users’ freedom of expression and privacy.

Are TikTok users subject to greater human rights risks, given that the platform’s parent company, ByteDance, is headquartered in China?

It’s hard to say. TikTok’s policies offer the same kinds of protections for user data as its U.S. competitors. Technical research by the Citizen Lab also suggests that the company takes technical precautions similar to those of U.S. platforms in its efforts to protect user data. TikTok says that U.S. user data is stored in the U.S. (with a backup in Singapore) and is at no risk of acquisition by the Chinese government. But we have to take TikTok’s word for it.

What does the policy contrast between TikTok (in the U.S.) and Douyin tell us about how Chinese companies operate both at home and in foreign jurisdictions?

The two platforms’ policy environments reflect critical differences in the legal and regulatory frameworks where they operate. That said, we observed that TikTok leverages an aggressive combination of human and algorithmic content curation and moderation techniques that appear to prioritize content that is entertaining and apolitical, similar to Douyin.

Why did we decide to study ByteDance?

Founded in 2012, ByteDance is headquartered in Beijing and legally domiciled in the Cayman Islands. It has various video sharing, social media, news, and web search products that are popular in China. Outside China, it is mostly known for its video sharing service, TikTok. For this study, we looked at two ByteDance services: TikTok and its counterpart in China, Douyin. TikTok and Douyin are both short-form video sharing platforms that share most of the same key features. They target the U.S./international and Chinese markets, respectively.

Each service has a broad user base in its target market. TikTok said in August 2020 that it had about 100 million monthly and 50 million daily active U.S. users, up nearly 800% from January 2018. According to App Annie, a mobile data and analytics company, TikTok was the most downloaded app from iOS and Android app stores in 2020, ahead of Facebook, Instagram, and YouTube. Douyin hit 600 million daily active users as of August 2020, according to ByteDance.

Both Douyin and TikTok leverage the same surveillance capitalism principles of behavioral data collection and monetization that have exploded profits for Big Tech companies in the U.S. Both apps track everything from users’ locations to likes and follows to the amount of time they spend looking at specific videos in order to serve them “personalized” organic and sponsored content. Both apps also leverage the popularity of certain users (known as “creators”) to broker sponsorship deals with third-party companies that pay creators to promote their products or services to users of the app.

Our decision to evaluate ByteDance, a privately held company, marks a departure from our typical standard, which is to evaluate only publicly traded companies. As a privately held company, ByteDance has no mandate to disclose information about its corporate governance to the public, as required by major stock exchanges and regulators in most markets. This gives us fewer avenues for putting pressure on the company. Nevertheless, we believe that the large-scale human rights and public interest implications of ByteDance’s services and the exceptionally high degree of public attention on the company, due to its rapid growth and its symbolic value in U.S.-China relations, merit our scrutiny.

TikTok has been in the political crossfire amid rising tensions between the U.S. and China, with policymakers worrying that Chinese authorities might have easy access to the data of TikTok users in the U.S. TikTok has publicly affirmed that U.S. user data is stored in the U.S. (with a backup in Singapore) and is at no risk of acquisition by the Chinese government, yet concerns about the data security of U.S. users have persisted among policymakers. Former U.S. president Donald Trump attempted to ban the app in a 2020 executive order that was blocked by the courts and then officially revoked by President Joe Biden in June 2021. The Biden administration put forth a new order that will set in motion “rigorous, evidence-based analysis” of certain software products owned by foreign adversaries, including China, “that may pose an unacceptable risk to U.S. national security.”

Although discussions about TikTok have been dominated by security-focused policy conversations and geopolitical concerns, particularly in the U.S. and India (where the app was banned in 2020), the service has unique qualities affecting freedom of expression and information in ways that differ from what we see on other popular platforms in the U.S. ByteDance is the first Chinese company to offer a social media service that actively competes with the biggest U.S. platforms, like YouTube and Instagram. The fact that TikTok is owned by a Chinese company is important not just from a privacy standpoint, but from a content governance perspective as well.

Many experts argue that algorithmic recommendation is the main driver of the popularity of both TikTok and Douyin. For this study, we wanted to further examine this theory and assess the companies’ stance on freedom of expression, alongside privacy and security. We sought to understand the implications of the Chinese ownership of two twin services with very different target markets, demonstrate the impact of different legal and political environments on the policies and practices of these twin services, and see how they affect users’ human rights.

How did we do the research?

For this study we looked at the policies of Douyin in China, TikTok in the U.S., and parent company ByteDance. Although TikTok operates internationally and has different policies for various geographic areas, we elected to focus on its policies for the U.S. for two reasons. First, the U.S. is TikTok’s flagship overseas market, with 100 million monthly active users, and a growing group of stakeholders is investigating the platform’s policies and practices. Second, we wanted to be able to compare our findings for TikTok with our findings for major U.S.-based social media services like Instagram and YouTube from the 2020 RDR Index, where we evaluated platforms’ policies in their home markets only.

We selected 39 of our indicators (out of the full list of 58) that would best measure the most prominent human rights risks for users of either service. Since we picked two services that pose a number of human rights risks stemming from their business models and heavy use of algorithms, we included our indicators on targeted advertising, algorithmic systems, and content governance. We also sought an empirical basis for the national security and privacy concerns that governments and the media have come to associate with TikTok. Therefore, we included our indicators assessing transparency around privacy, information security, and government demands to access user information.

We reviewed the public documents disclosed by the company, including policies provided to users and business partners, company blog posts, and reports, and assessed them against the criteria of each element contained in the 39 selected indicators. Each indicator comprises a set of questions (what we call “elements”) about the company’s policy or practice in a specific area. We give each service one of three possible scores for each element: “full credit” (100), “partial credit” (50), or “no credit”/“no disclosure found” (0). Each service receives a per-indicator score reflecting the mean value of all elements in the indicator. Learn more about our methodology.
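To make the scoring arithmetic concrete, here is a minimal sketch in Python of how a per-indicator score is derived from element scores. The element names and values are hypothetical, chosen only to illustrate the three possible scores described above.

```python
from statistics import mean

# Hypothetical element-level results for a single indicator, using the
# three possible scores: 100 (full credit), 50 (partial credit), and
# 0 (no credit / no disclosure found).
element_scores = {
    "element_1": 100,
    "element_2": 50,
    "element_3": 0,
    "element_4": 100,
}

# The per-indicator score is the mean value of all elements in the indicator.
indicator_score = mean(element_scores.values())
print(indicator_score)  # 62.5
```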

Alongside our indicator-based evaluation of ByteDance and its video-sharing services, we reviewed independent research of TikTok by the Citizen Lab and the Mozilla Foundation. We also reviewed independent media coverage and commentary about the company and a series of leaked internal documents from TikTok that sparked investigations by The Guardian, The Intercept, and German digital rights blog Netzpolitik.

Our research findings

We rank companies on their digital rights governance, and on their policies and practices affecting freedom of expression and privacy. Our findings are organized by these categories below. In certain cases, we compare our findings for TikTok and Douyin with our data for Instagram, from the 2020 RDR Index. Our primary objective here is to give readers an idea of how TikTok compares to one of its most prominent U.S.-based competitors.

Digital rights governance

In contrast to other large multinational tech companies, ByteDance offers very little public documentation of governance policies or practices that affect people’s rights to free expression and privacy. TikTok has distinguished itself from its parent company in some policy areas that directly affect users’ rights, by doing things like publishing transparency reports, but overall, the platform does not make an explicit commitment to human rights, or conduct human rights due diligence, in accordance with our standards.

Values represent combined average indicator scores for each issue area. See appendix for more.

Neither ByteDance, nor TikTok, nor Douyin pledged to protect privacy or freedom of expression as defined by human rights law (G1), nor did any of them conduct human rights impact assessments, a key tool for companies seeking to prevent their products and services from causing human rights harms (G4). This is typical of Chinese social platforms ranked by RDR, but it puts TikTok behind major U.S. peers such as Instagram (owned by Facebook), which conducts human rights impact assessments in some key areas, including its processes for policy enforcement and its approach to government regulations and policies that affect freedom of expression and information, and privacy.

Content governance

User content rules/governance and enforcement
Indicators G6b, F1a, F3a, F4a, F4b

Both services provided public content rules that were easy to find and understand (F1a), though Douyin was slightly more detailed in explaining the circumstances under which it may restrict content or user accounts (F3a) and appeared to offer a more comprehensive system for users to appeal moderation decisions (G6b). Leaks have revealed that TikTok also maintains more detailed internal rules that are not visible to the public. TikTok reported more data than Douyin about the nature and volume of its enforcement actions (F4a, F4b), roughly on par with Instagram.

Values represent combined average indicator scores for each issue area. See appendix for more.

Ad content and targeting rules and enforcement
Indicators F1b, F1c, F3b, F3c, F4c

Advertising is the primary source of revenue for both ByteDance services, as it is for other major platforms in China and the U.S. Whereas Douyin’s advertising policies were jumbled and hard to find, TikTok was more transparent about its ad policies and enforcement actions, narrowly surpassing Instagram’s score on this metric in the 2020 RDR Index. A Mozilla study found that the company did not fully enforce its advertising policies when it came to sponsored content (i.e., content that third parties paid TikTok influencers to share), a misstep for which Instagram has also been criticized. Douyin failed to provide any data about the volume and nature of its enforcement of ad content policies (F4c).

Algorithms, bots
Indicators F1d, F12, F13

Like most companies, neither service provided comprehensive rules governing their use of algorithmic systems (F1d). However, both services offered disclosures describing their algorithmic curation processes (F12), and TikTok published a dedicated document for this purpose, which scored better than any other service ranked by RDR in 2020, including YouTube and Instagram. The document explains design considerations and some of the elements of user behavior that influence the algorithm, but it is far from comprehensive. Though ByteDance’s public materials do not mention this, leaked internal documents have shown that the algorithm also takes input from TikTok staff, who assign content to different levels of algorithmic amplification. Although we were not able to find similar information about Douyin’s practices, the general similarity of the services suggests this takes place on that platform as well, and Chinese blogs have discussed the existence of such a process.

Government demands to censor content
Indicators F5a, F6, F8

Along with government demands to access user information, government censorship demands are where we see the starkest difference between ByteDance’s two services. Unsurprisingly, this reflects China’s unique political and legal environment. Douyin discloses almost no information about its processes or data related to such demands, though a former ByteDance employee claimed the company receives up to 100 per day. While it is not as clear and thorough in its disclosures as competitors such as Instagram, TikTok does regularly report on such demands, and offers this data broken out by country of origin.

Values represent combined average indicator scores for each issue area. See appendix for more.


Privacy and security

Government demands to access user information
Indicators P10a, P11a, P12

Only TikTok offered meaningful disclosure in this area. Its biannual transparency reports break out government demands for user data by country, though it is worth noting that these reports do not mention any data requests from the government of China. Douyin offers no such information. Although there are no laws or regulations in China prohibiting Chinese companies from releasing data about government demands to access user information, the political and legal environment discourages companies from doing so.

User information
Indicators P1a, P1b, P3-P9

Our data highlights the contrast between legal regimes for user data protection in China, where the 2017 Cybersecurity Law and a pending data protection law cover these areas, and the U.S., which has no comprehensive data protection law. Douyin outperformed TikTok on our indicators thanks to its clearer and more comprehensive disclosures of what information it collects (P3a), infers (P3b), and retains (P6), as well as its purposes for doing so (P5). Unlike TikTok, Douyin pledged to collect only data that is reasonably necessary for its functionality, as required by Chinese law, but it has been reprimanded for poor compliance with these requirements. A technical analysis by the Citizen Lab found no discrepancies between what the two apps’ privacy policies say and what information their systems actually collect. Despite its overall advantage in this area, Douyin provided fewer options than TikTok for users to access (P8) or control the use of (P7) their information.

Values represent combined average indicator scores for each issue area. See appendix for more.

Security
Indicators P13-P17

While TikTok has no published policy regarding data breaches, Douyin received a perfect score for pledging to notify users and help them navigate the consequences of such information leaks (P15), in accordance with China’s cybersecurity law. Nevertheless, TikTok outperformed Douyin on security-related indicators, largely because it offered multi-factor authentication to protect users’ accounts (P17) and made it much easier for external researchers to submit reports of security vulnerabilities (P14). Douyin has a bug-bounty program, but does not provide multi-factor authentication.


Download the complete data set, or get in touch!

We invite you to download our full dataset [.XLSX / .CSV] and find your own insights! The dataset includes extensive excerpts from the two services’ public disclosures, analysis of their alignment with RDR’s rigorous human rights indicators, and a complete list of our sources. Contact us at info [at] rankingdigitalrights.org with questions about the analysis or data collection.
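For readers who want to explore the data programmatically, a minimal sketch along these lines may be a helpful starting point. Note that the filename and column names below are hypothetical placeholders, not the actual schema of the download.

```python
import pandas as pd

# Hypothetical filename and columns; substitute the names used in the
# actual .CSV file linked above.
df = pd.read_csv("rdr_bytedance_dataset.csv")

# Example: average score per service across all evaluated indicators.
print(df.groupby("service")["score"].mean())
```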


APPENDICES

Appendix A: Our indicators

For this study, we selected 39 of our indicators (from the full list of 58) that would best measure the most prominent human rights risks for users of either service.

G: Digital rights governance

F: Freedom of expression

P: Privacy


Appendix B: Indicator Groups

Each of our charts shows aggregate scores for indicator groups listed below. Each aggregate score represents the average of scores for each indicator in the group.
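As a sketch of this aggregation step, the snippet below computes one group’s aggregate score in Python. The group membership comes from the “User content rules/governance and enforcement” section above; the indicator scores themselves are hypothetical.

```python
from statistics import mean

# Hypothetical per-indicator scores for one service.
indicator_scores = {"G6b": 50.0, "F1a": 100.0, "F3a": 50.0, "F4a": 25.0, "F4b": 25.0}

# Indicators in the "User content rules/governance and enforcement" group.
group = ["G6b", "F1a", "F3a", "F4a", "F4b"]

# The aggregate score is the average of the group's indicator scores.
aggregate_score = mean(indicator_scores[i] for i in group)
print(aggregate_score)  # 50.0
```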

Appendix C: Sources list

In addition to conducting our own research, drawing on policies and other documents published by ByteDance, Douyin, and TikTok, we also relied on the work of other organizations that have studied and investigated TikTok and Douyin.

On July 1, 2021, RDR Senior Policy and Partnerships Manager Nathalie Maréchal testified before the United States International Trade Commission in the context of its investigation into foreign censorship policies and practices affecting US companies. The investigation was initiated in response to a request from the US Senate Finance Committee concerning censorship as a non-tariff barrier to trade. Below is her written testimony.

Good morning and thank you for inviting me to testify. I am Nathalie Maréchal, Senior Policy & Partnerships Manager at Ranking Digital Rights (RDR). Previously, I was a doctoral fellow at the University of Southern California, where I researched the rise of digital authoritarianism, the transnational social movement for digital rights, and the role of the U.S. Internet Freedom Agenda in advancing freedom of expression, privacy, and other human rights around the world. 

RDR is an independent research program housed at the New America think tank. RDR works to promote freedom of expression and privacy on the internet by ranking the world’s most powerful digital platforms and telecommunications companies on international human rights standards. Our Corporate Accountability Index evaluates 26 publicly traded digital platforms and telecom companies headquartered in 12 countries. Among them are the U.S. “Big Tech” giants Apple, Facebook, Google, and Microsoft, as well as some of the largest companies in China, such as Baidu and Tencent. All told, these companies hold a combined market capitalization of more than US$11 trillion. Their products and services affect a majority of the world’s 4.6 billion internet users.

At RDR, we believe that companies should build in respect for human rights throughout their value chain. They should be transparent about their commitments, policies, and practices so their users and their communities can hold them accountable when they fall short. Foreign censorship impedes them from doing this by requiring them to participate in human rights violations and limiting what they can disclose about their own operations. This is not a new problem: the first Congressional hearing on the topic took place in 2007, after Yahoo! turned over the email accounts of two democracy activists to the Chinese government. But it is a problem that grows more urgent every year, as more and more social, political and economic activity is mediated through internet companies—especially in the pandemic context—and governments develop new strategies and tactics to control the flow of information online, with grave consequences for democracy and human rights—and trade. The U.S. government and American companies must play a leading role in ensuring that all human rights, including freedom of expression and information, are respected online as well as offline. 

Governments use strategies—known as information controls—that go beyond simply suppressing speech in order to control public discourse and thus manipulate domestic and foreign populations, often with the consequence or even the aim of violating human rights. Information controls comprise “techniques, practices, regulations or policies that strongly influence the availability of electronic information for social, political, ethical, or economic ends.” All of these strategies have implications for U.S. companies’ ability to enter and compete in foreign markets and constitute non-tariff barriers to trade. They make it more expensive for American companies to respect human rights, and can result in companies adopting policies and practices that directly undermine U.S. foreign policy priorities. 

Freedom of expression and information as an international human right

On June 16, the 10th anniversary of the UN Guiding Principles on Business and Human Rights (UNGPs), Secretary of State Antony Blinken renewed the United States’ commitment to advancing business and human rights under the framework set out in the UNGPs, which says: 1) states have the duty to protect human rights; 2) businesses have a responsibility to respect human rights; and 3) victims affected by business-related human rights issues should have access to remedy.

The cooperation of private companies like internet service providers (ISPs), telecom operators and over-the-top (OTT) intermediaries like social networking sites and messaging apps is almost always required for information controls to be effective. And given the leading role that American companies have played in the growth of the global internet, this means that American companies are often implicated. 

But again, American companies doing business in foreign markets have a responsibility to respect freedom of expression and information even when national governments fail to do so themselves. Of course, they also have the responsibility to do this within our borders, though I recognize that is not the focus of this hearing.

Information controls: Policies and Practices

Today I will talk about four broad information control strategies: technical barriers to access; content removals within social media platforms; measures intended to cause chilling effects or self-censorship; and online influence campaigns.

The most blatant technical barriers to access are:

  • Network shutdowns and disruptions: Governments frequently order ISPs and mobile operators to shut down network access in specific areas, often coinciding with political events like elections, protests, and armed conflict. They may also demand that companies filter the specific protocols associated with VoIP calls or even individual messaging services like WhatsApp. The companies that produce the hardware and software required for network operations are under pressure to build these capabilities into their products.

  • A more precise technical approach is to block specific web services, sites and pages: These measures prevent the population from accessing forbidden content online, essentially aiming to transpose national boundaries from the physical world into cyberspace. China’s “Great Firewall,” which prevents internet users in mainland China from accessing a broad range of foreign websites, is a classic example.

The second strategy is to restrict content within social media platforms, which can be done in a number of ways:

  • Many countries prohibit specific types of expression, thus creating legal requirements for OTT services to moderate user content according to local law. For example, Thailand prohibits insulting the king and his family; Russia forbids so-called “LGBT propaganda”; in Turkey it is a crime to “insult the nation.” Internet companies that operate in those markets are often required to proactively identify and restrict such content, either by removing it altogether or by restricting access to it within the country in question. When they do so, they are in effect acting as censors on behalf of the local government. However, companies struggle to identify and restrict all instances of potentially rule-breaking content without also censoring legal speech.
  • Authorities can issue legal requests to take down or geographically restrict specific user accounts or pieces of content. Many platforms will only consider demands sent by a court or other judicial authority within a proper legal framework, and are publicly committed to pushing back against illegal or overly broad requests.
  • Some countries, including China, hold internet intermediaries like social media platforms legally responsible for their users’ illegal speech or content. These intermediary liability regimes incentivize companies to aggressively moderate content using a combination of AI tools and human labor that often results in false positives.
  • Governments also abuse companies’ own content moderation processes. Most social media platforms’ user content rules prohibit types of expression that are legal under national law but that governments may nevertheless want to restrict, like representations of groups designated as terrorist organizations. Governments can report such content to companies through user reporting or “flagging” mechanisms in order to have the content restricted outside of any legal process.
  • Secret or informal relationships between governments and companies are, by definition, hard to detect, but journalists have found evidence suggesting that senior social media company employees maintain relationships with high-ranking government officials or their political parties. This can lead to content moderation decisions that benefit the government or political party in question.

The third strategy is to create chilling effects or a culture of self-censorship: Academic research has demonstrated that people self-censor when they know or suspect they are under surveillance, and may face repercussions for their online expression or activity. Beyond intermediary liability regimes, specific policies and practices governments use to produce chilling effects include:

  • Targeted surveillance of activists and civil society groups who oppose authoritarian governments.
  • Banning end-to-end encryption used in secure messaging tools, or requiring the use of “responsible encryption,” which exposes internet users to surveillance risks and repercussions for their online speech.
  • “Real name” policies and ID requirements that force users to register their SIM cards with the authorities, provide proof of identity when using an internet cafe, and link their online activities to their “real name.” These make anonymous speech impossible, creating “chilling effects” that inhibit the expression and even the consumption of controversial online content.
  • Data localization requirements. Since the 2013 Snowden revelations, many governments now require that data about their citizens be stored within their borders, ostensibly to protect the data from U.S. intelligence. However, in many cases the real effect of data localization is to make the data easier to access for domestic intelligence and law enforcement.

The fourth information control strategy is online influence campaigns. Governments increasingly seek to control public opinion not by preventing the production and dissemination of information they dislike, but by denying it the public’s attention: they flood the public sphere with false, misleading, or distracting information, a form of censorship by “distributed denial of attention.” The spread of these tactics has led to the current misinformation and disinformation crisis. In response to this crisis, a wide range of actors, including governments and civil society organizations, have called on companies to adopt and enforce stricter rules against mis- and disinformation on their platforms. As with other types of potentially harmful content, company efforts to restrict influence operations can result in collateral censorship of legitimate expression that is protected under international human rights law.

Limiting companies’ ability to enforce their own content rules is the next frontier in information controls. When companies crack down on hate speech, incitement and disinformation, they sometimes limit or censor the speech of government actors or political parties. Last month, Twitter removed a tweet from the official account of Nigeria’s president that contained a veiled threat against Igbo people, who represent the third largest ethnic group in the country. The next day, Twitter was blocked nationwide and officials threatened to arrest anyone using the service via VPN. This has created serious consequences for Twitter, and has also left people in Nigeria—the most populous country in Africa, with an estimated 40 million Twitter users—unable to use the service.

In conclusion: digital authoritarians aim to structure the information environment in ways that are beneficial to their own strategic narratives, and detrimental to discourse that challenges them. By addressing the negative effects of foreign censorship on U.S. companies, we will enable those companies to do a better job of upholding their human rights obligations and setting an example for companies around the world.

Thank you again for the opportunity to testify today. I look forward to your questions.

Twitter under pressure. Photo by Quinn Dombrowski (CC-BY-SA-2.0)

This is the RADAR, Ranking Digital Rights’ newsletter. This special edition was sent on June 17, 2021. Subscribe here to get The RADAR by email.

US-based tech companies have long vowed to enforce their own content and data-collection rules, while also following the “law of the land” in every country where they operate. With the exception of countries that block their services altogether, this approach has allowed tech giants to remain accessible in most places, most of the time. But with powerful political actors worldwide increasingly using social media platforms to disseminate their views (whether fact or fiction) and occasionally running afoul of platform rules, this delicate balance of interests is being tested.

Recent events in both India and Nigeria have shown how company efforts to reduce disinformation and harmful speech on their platforms—paired with their relative lack of transparency in showing how they enforce content rules—can put their business operations at risk.

India’s new IT Rules, passed in February 2021, require “significant social media intermediaries” like Google and Facebook to introduce new measures on issues ranging from end-to-end encryption to locally-staffed grievance programs for content removal and related disputes.

While Facebook and Google both took measures to comply with the new rules before they went into force, Twitter held out. But after Indian security authorities came to Twitter’s offices in Delhi and Gurgaon, seeking to investigate Twitter’s decision to label a tweet (sent by a BJP party official) as “manipulated media,” the company vowed at the eleventh hour that it would “strive to comply with applicable law in India.” Read the Software Freedom Law Centre’s analysis of the 2021 IT Rules.

Meanwhile in Nigeria, Twitter removed a tweet from the official account of President Muhammadu Buhari that contained a veiled threat against Igbo people, who represent the third largest ethnic group in the country. The next day, Twitter was blocked nationwide and officials threatened to arrest anyone using the service via VPN.

Our colleagues at Paradigm Initiative have called out their government for violating Nigerians’ rights to freedom of expression, which are protected under both local and international law. They also have vowed to keep tweeting.

In both contexts, these dynamics raise critical questions: To what extent do companies actually follow local laws? And to what extent are they enforcing their own policies, in every country where they operate? Without robust transparency mechanisms, it’s hard to know.

Year after year, our research has shown that all the companies in the RDR Index have significant work to do when it comes to detailed transparency reporting on content policy enforcement and building robust remedy and appeal mechanisms for people wishing to contest a company’s decisions. Visit the 2020 RDR Index site to see our most recent findings.

TikTok is on the tightrope too—and we’re checking it out

In the US last week, President Biden revoked three Trump-era executive orders that attempted to ban both TikTok and WeChat (among other services) on national security grounds. Biden then introduced a new order that will set in motion a “rigorous, evidence-based analysis” of certain software products owned by foreign adversaries “that may pose an unacceptable risk to U.S. national security.”

The move puts TikTok back in the spotlight in Washington, and re-raises questions about whether the service’s 100+ million users in the US should be concerned about the company’s data privacy and security practices, given that its roots are in China.

Since this past spring, we’ve been digging into TikTok’s policies and practices in an effort to answer some of these questions, and explore others. Stay tuned for our findings, out in just a few weeks! If you’re curious to learn more, or to share your own insights on the company, get in touch.

Hey Big Tech, don’t leave us hanging

What’s one thing each company can do to improve its human rights record? In mid-May, we launched our annual joint campaign with Access Now and the Business & Human Rights Resource Centre to pressure each of the 26 companies in the RDR Index to make one single change to their policies or practices. Read our recommendations.

So far, only three companies have responded: Kakao, Vodafone, and Verizon Media (owner of Yahoo mail). For the 23 other companies we wrote to, we can only assume that they’ve decided users’ human rights are not a high priority. We hope they’ll prove us wrong in the days to come.

Other campaigns we’re supporting:

  • The Electronic Privacy Information Center is urging the Biden Administration to ensure that any new transatlantic data transfer deal will be coupled with legislation that reforms government surveillance practices and guarantees privacy. Read the letter.
  • The Electronic Frontier Foundation is calling on Venmo and PayPal to publish regular transparency reports and provide meaningful notice to users before restricting their accounts. Read the letter.
  • We also joined 175 civil society organizations, technologists, and experts to call for a global ban on biometric recognition technologies that can be used to identify and track people worldwide.

Celebrate Pride month with GLAAD’s Social Media Safety Index

We are proud to promote GLAAD’s first-ever baseline evaluation of the LGBTQ user safety experience across the social media landscape. This comprehensive new report compares key safety and security features of Facebook, Instagram, Twitter, YouTube, and TikTok, drawing heavily on our indicators and 2020 RDR Index findings. Read the report.

RDR media hits

Politico: RDR Projects Director Ellery Biddle was quoted in a story about Google’s new agreement to build a cloud region in Israel, and how this might affect people’s rights, particularly Palestinians and Arab Israelis. “We’re pushing [Google] to be as transparent as possible about what they’re agreeing to do, how they’re going to treat peoples’ data, [and] what kinds of protections and due process mechanisms are in place to protect peoples’ privacy,” she said. Read via Politico

Digital Privacy News: Commenting on Google’s new approach to user data collection, RDR Senior Policy and Partnerships Manager Nathalie Maréchal said, “[Tech companies] have been making unfathomable amounts of money from monetizing data about internet users and selling advertising against it…They see that regulation is coming — and they’re hoping that by reforming their business a little bit, they’ll be able to stave off the threat of regulation.” Read via Digital Privacy News

Deutsche Welle Akademie: RDR Projects Director Ellery Biddle was featured in a podcast of Deutsche Welle Akademie’s Media and Information Literacy Expert Network (MILEN), speaking about digital censorship of independent and critical voices on major social media platforms. Listen via Spotify, Apple Podcasts, Buzzsprout.


Put us on your radar! Subscribe to The RADAR to receive our newsletter by email.

Human rights defenders document protests in Medellín, Colombia. Photo by Humano Salvaje (CC-BY-SA-2.0)

This is the RADAR, Ranking Digital Rights’ newsletter. This special edition was sent on May 26, 2021. Subscribe here to get The RADAR by email.

It was not just a glitch. In recent weeks, incidents of censorship on Facebook and Instagram spiked and brought frustration to activists and journalists working to document protests in Colombia, violence in occupied Palestine, and the public health crisis in India. Similar patterns have emerged for Twitter users in the latter two locales. Facebook also removed a swathe of posts referencing the Al-Aqsa mosque in the Old City of Jerusalem. Its systems then added insult to injury, notifying users that posts were removed because they were associated with “violence or dangerous organizations.”

RDR joined statements issued by SMEX and ARTICLE 19 denouncing this wave of censorship, demanding that the companies be more transparent about how and why they carry out these types of removals, and offering recommendations drawing on RDR’s corporate accountability standards.

While some content has been restored, many questions remain. Spokespeople for both companies have attributed the problems to technology, not human decision-making, using terms like “technical error” and “glitch.” A tweet from Instagram Comms explained that “…it took us such a long time to figure out what was taking place…because this [was] an automated deployment…” In other words, a machine did it.

What kinds of machines have such supreme decision-making powers that they can cause thousands of pieces of evidence of human rights violations to disappear from a platform, without any human involvement? We know that algorithmic systems lie at the center of this story, but we don’t know much more.

In the 2020 RDR Index, we showed how some of the world’s most powerful tech companies offer no actionable public information about how their algorithms are built and how they’re meant to work, and we pointed to some examples of just how harmful this can be for people’s rights. This major wave of recent takedowns proves our point.

In the coming months, we will release a bite-sized report on one of everyone’s favorite new companies to scrutinize: ByteDance! This will mark the launch of an expanded research agenda at RDR, where we’ll be using new methods to investigate and report on decisions made by algorithms.

Google pipes data center. Photo by Jorge Jorquera (CC BY-NC-ND 2.0)

Putting a check on Big Tech: Access Now letter campaign

What can companies do to prevent such consequences in the future and improve their human rights records? We’ve got answers to this question, for every company we rank! Last week, we launched our annual joint campaign with Access Now and the Business & Human Rights Resource Centre to pressure each of the 26 companies in the RDR Index to make one single change to their policies or practices.

Read our recommendations

The new oil: Google’s data center deal with Saudi Arabia

Alongside Human Rights Watch, SMEX, and other NGO partners, RDR co-authored a public letter asking Google to go on the record about a pending agreement to build a massive new data center in Saudi Arabia.

The center will store troves of personal data belonging to organizations in the Kingdom and throughout the Arab region, including media and human rights groups. We are pushing Google to publish evidence that it has carried out sufficient human rights due diligence on this deal, given Saudi Arabia’s notorious use of digital tools to spy on and persecute critics like Jamal Khashoggi. Evidence has shown that the late Washington Post contributor was heavily surveilled by Saudi authorities prior to his 2018 assassination at the Saudi consulate in Istanbul.

Farewell, Rebecca!

It is with heavy hearts that we bid farewell to our founder, Rebecca MacKinnon, who will conclude her work with RDR at the end of this month. A leading advocate for freedom of expression and privacy online since 2004, Rebecca conceived and established RDR in 2013 and led our program until 2019, when she elected to step back and seek new leadership for the organization. She found this in Jessica Dheere, who became our director in September 2020.

Read our tribute post →

RDR media hits

Columbia Journalism Review: RDR Senior Policy Analyst Nathalie Maréchal contributed to a CJR Galley discussion about the Facebook Oversight Board, alongside legal scholars Kate Klonick and Evelyn Douek, journalist Alan Rusbridger, and CJR’s Mathew Ingram. “I do not think it is possible to adequately govern online expression without examining the business model,” argued Maréchal. Read via Columbia Journalism Review

Slate: Rebecca MacKinnon wrote an op-ed arguing that Facebook’s biggest problem lies in its actual board of directors, not the fanfared Oversight Board. This week, Facebook shareholders will vote on a proposal urging Zuckerberg to relinquish his seat as chair of the board to an independent chair. “Ready or not, Facebook might then have to experience real oversight by its actual governing board,” she wrote. Read via Slate

Consumer Reports: New research shows that advertisers have been targeting teenage Facebook users to promote subjects that the company explicitly prohibits in its own advertising policy, including gambling and eating disorders. “Enforcing its own rules for advertising is the bare minimum Facebook should be doing,” said RDR’s Nathalie Maréchal, speaking with CR author Kaveh Waddell. Read via Consumer Reports

Drzavljan D: RDR Research Director Amy Brouillette spoke on Drzavljan D, a podcast focusing on information society and the media. “Companies have had a longer history of interacting with regulators and not really confronting demands from the public, particularly telecommunications companies,” said Brouillette. Listen via Drzavljan D

Where to find us

RightsCon 2021 | Maximizing company transparency in the majority world
June 11 at 8:30 AM ET | Register here
How can tech companies break down barriers to transparency and how can the human rights community motivate them to do so? Join RDR Company Engagement Lead Jan Rydzak alongside company representatives, civil society members, investors, and researchers to identify strategies to hold companies accountable to users.

Global Solutions Summit | Liberal discourse and values on the internet
May 27 at 16:00 CET/10:00 AM ET | Register here
Rebecca MacKinnon will join a panel at the Global Solutions Summit, an event hosted in cooperation with the German Federal Ministry of Justice and Consumer Protection. Fellow panelists will include German Minister Christine Lambrecht and European Commission Vice President Věra Jourová.


Put us on your radar! Subscribe to The RADAR to receive our newsletter by email.

Rebecca MacKinnon at the Internet Governance Forum in 2015. Photo by Steffen Leidel.

It is with heavy hearts that we bid farewell to our founder, Rebecca MacKinnon, who will conclude her work with RDR at the end of this month. Rebecca conceived and founded RDR in 2013 and ran it until she passed the baton to Jessica Dheere, who became RDR’s director in September 2020.

Rebecca has been a leading advocate for freedom of expression and privacy online since 2004. A former CNN bureau chief in Beijing and Tokyo, she is co-founder of Global Voices, a founding member of the Global Network Initiative, and has held fellowships at Harvard, Princeton, New America, and the University of California.

Rebecca’s 2012 book, Consent of the Networked: The Worldwide Struggle for Internet Freedom, was described by writer and tech activist Cory Doctorow as “an absolutely indispensable account of the way that technology both serves freedom and removes it.” In a sharp departure from popular “liberation technology” narratives of the time, Rebecca delivered an early warning that unaccountable tech company practices posed a threat to the future of democracy and human rights. A winner of the Goldsmith Book Prize, Consent of the Networked has become a seminal text in defining how we think about human rights in the digital age.

But for some readers, Consent of the Networked also raised the question: “So you’ve written this book about the problem, now what are you going to do about it?”

Ranking Digital Rights was her answer. A handful of companies had voluntarily joined the Global Network Initiative (which she also helped launch in 2008) and committed to basic due diligence and transparency standards in response to government censorship and surveillance demands. But while the GNI has set standards for industry best practices in dealing with government demands, many of the world’s most powerful tech giants have yet to join. What’s more, the GNI does not address a wide range of human rights implications stemming from companies’ business models, design choices, and other commercial practices. Rebecca concluded that a systematic, global, regularly updated ranking was needed, modeled after emerging benchmarks that rank companies in other industries on environmental sustainability, labor practices, and political donations.

In a 2019 Medium post about RDR’s purpose, Rebecca wrote:

“…the need to hold companies accountable is more pressing than ever. People have the right to know — and companies have a responsibility to show — how our freedom of expression and privacy are affected by the internet platforms and services we increasingly depend on. The RDR Index data can be used by civil society advocates, investors, policymakers, and companies themselves to identify where specific companies fall short in protecting users’ rights and how they can improve. It can also be used as a tool to show where law and regulation need to be improved or reformed.”

A September 2013 progress report offers an eye-opening reminder of what it took to implement that idea—and how far RDR has come. Rebecca was RDR’s only full-time employee that year, working in collaboration with a range of research partners, interns, fellows, and contractors to develop our initial pilot methodology. The inaugural 2015 RDR Corporate Accountability Index was launched with only two additional full-time staff, in partnership with the ESG research firm Sustainalytics.

By 2020 RDR’s staff had expanded to over a dozen people, thanks to the generous support of a growing group of funders and partners. Today, the RDR Index offers the only year-on-year ranking of the world’s most powerful digital platforms and telecommunications companies on policies and practices affecting users’ human rights. The RDR Index has become a widely recognized global standard for corporate accountability in the tech sector, and a key resource for policymakers, investors, and civil society organizations advocating in the field. 

Among digital rights advocates, Rebecca needs no introduction. This is not only because of her intellectual leadership and tireless efforts to hold tech companies accountable to the public. It is also because of her strengths as a builder of networks and a mentor of new voices and advocates in our field. On a personal note, as a former member of Global Voices’ core staff and a current member of RDR’s leadership, I can’t imagine how my career and my understanding of the world would have taken shape without Rebecca’s contributions and guidance. And I know that I am in good company.

Rebecca, we wish you all the best in your future work. And we can’t wait to raise a glass to you in person, some day in the not-too-distant future.