Testimony to the U.S. International Trade Commission on foreign censorship and human rights obligations for US-based companies

On July 1, 2021, RDR Senior Policy and Partnerships Manager Nathalie Maréchal testified before the United States International Trade Commission in the context of its investigation into foreign censorship policies and practices affecting US companies. The investigation was initiated in response to a request from the US Senate Finance Committee concerning censorship as a non-tariff barrier to trade. Below is her written testimony.

Good morning and thank you for inviting me to testify. I am Nathalie Maréchal, Senior Policy & Partnerships Manager at Ranking Digital Rights (RDR). Previously, I was a doctoral fellow at the University of Southern California, where I researched the rise of digital authoritarianism, the transnational social movement for digital rights, and the role of the U.S. Internet Freedom Agenda in advancing freedom of expression, privacy, and other human rights around the world. 

RDR is an independent research program housed at the New America think tank. RDR works to promote freedom of expression and privacy on the internet by ranking the world’s most powerful digital platforms and telecommunications companies against international human rights standards. Our Corporate Accountability Index evaluates 26 publicly traded digital platforms and telecom companies headquartered in 12 countries. Among them are the U.S. “Big Tech” giants Apple, Facebook, Google, and Microsoft, as well as some of the largest companies in China, such as Baidu and Tencent. All told, these companies hold a combined market capitalization of more than US$11 trillion. Their products and services affect a majority of the world’s 4.6 billion internet users.

At RDR, we believe that companies should build in respect for human rights throughout their value chain. They should be transparent about their commitments, policies, and practices so their users and their communities can hold them accountable when they fall short. Foreign censorship impedes them from doing this by requiring them to participate in human rights violations and limiting what they can disclose about their own operations. This is not a new problem: the first Congressional hearing on the topic took place in 2007, after Yahoo! turned over the email accounts of two democracy activists to the Chinese government. But it is a problem that grows more urgent every year, as more and more social, political and economic activity is mediated through internet companies—especially in the pandemic context—and governments develop new strategies and tactics to control the flow of information online, with grave consequences for democracy and human rights—and trade. The U.S. government and American companies must play a leading role in ensuring that all human rights, including freedom of expression and information, are respected online as well as offline. 

Governments use strategies—known as information controls—that go beyond simply suppressing speech in order to control public discourse and thus manipulate domestic and foreign populations, often with the consequence or even the aim of violating human rights. Information controls comprise “techniques, practices, regulations or policies that strongly influence the availability of electronic information for social, political, ethical, or economic ends.” All of these strategies have implications for U.S. companies’ ability to enter and compete in foreign markets and constitute non-tariff barriers to trade. They make it more expensive for American companies to respect human rights, and can result in companies adopting policies and practices that directly undermine U.S. foreign policy priorities. 

Freedom of expression and information as an international human right

On June 16, the 10th anniversary of the UN Guiding Principles on Business and Human Rights (UNGPs), Secretary of State Antony Blinken renewed the United States’ commitment to advancing business and human rights under the framework set out in the UNGPs, which holds that: 1) states have the duty to protect human rights; 2) businesses have a responsibility to respect human rights; and 3) victims of business-related human rights abuses should have access to remedy. 

The cooperation of private companies like internet service providers (ISPs), telecom operators and over-the-top (OTT) intermediaries like social networking sites and messaging apps is almost always required for information controls to be effective. And given the leading role that American companies have played in the growth of the global internet, this means that American companies are often implicated. 

But again, American companies doing business in foreign markets have a responsibility to respect freedom of expression and information even when national governments fail to do so themselves. Of course, they also have the responsibility to do this within our borders, though I recognize that is not the focus of this hearing.

Information controls: Policies and Practices

Today I will talk about four broad information control strategies: technical barriers to access; content removals within social media platforms; measures intended to cause chilling effects or self-censorship; and online influence campaigns.

The most blatant technical barriers to access are:

  • Network shutdowns and disruptions: Governments frequently order ISPs and mobile operators to shut down network access in specific areas, often coinciding with political events like elections, protests, and armed conflict. They may also demand that companies filter the specific protocols associated with VoIP calls or even individual messaging services like WhatsApp. The companies that produce the hardware and software required for network operations are under pressure to build these capabilities into their products.

  • A more precise technical approach is to block specific web services, sites, and pages: These measures prevent the population from accessing forbidden content online, essentially aiming to transpose national boundaries from the physical world into cyberspace. China’s “Great Firewall,” which prevents internet users in mainland China from accessing a broad range of foreign websites, is a classic example.

The second strategy is to restrict content within social media platforms, which can be done in a number of ways:

  • Many countries prohibit specific types of expression, thus creating legal requirements for OTT services to moderate user content according to local law. For example, Thailand prohibits insulting the king and his family; Russia forbids so-called “LGBT propaganda”; in Turkey it is a crime to “insult the nation.” Internet companies that operate in those markets are often required to proactively identify and restrict such content, either by removing it altogether or by restricting access to it within the country in question. When they do so, they are in effect acting as censors on behalf of the local government. However, companies struggle to identify and restrict all instances of potentially rule-breaking content without also censoring legal speech.
  • Authorities can issue legal requests to take down or geographically restrict specific user accounts or pieces of content. Many platforms will only consider demands sent by a court or other judicial authority within a proper legal framework, and are publicly committed to pushing back against illegal or overly broad requests.
  • Some countries, including China, hold internet intermediaries like social media platforms legally responsible for their users’ illegal speech or content. These intermediary liability regimes incentivize companies to aggressively moderate content using a combination of AI tools and human labor, which often results in false positives.
  • Governments also abuse companies’ own content moderation processes. Most social media platforms’ user content rules prohibit types of expression that are legal under national law but that governments may nevertheless want to restrict, such as representations of groups designated as terrorist organizations. Governments can report such content to companies through user reporting or “flagging” mechanisms in order to have it restricted outside of any legal process.
  • Secret or informal relationships between governments and companies are, by definition, hard to detect, but journalists have found evidence suggesting that senior social media company employees maintain relationships with high-ranking government officials or their political parties. This can lead to content moderation decisions that benefit the government or party in question. 

The third strategy is to create chilling effects or a culture of self-censorship: Academic research has demonstrated that people self-censor when they know or suspect that they are under surveillance and may face repercussions for their online expression or activity. Specific policies and practices governments use to produce chilling effects include intermediary liability regimes, as well as the following:

  • Engaging in targeted surveillance of activists and civil society groups who oppose authoritarian governments.
  • Banning end-to-end encryption used in secure messaging tools, or requiring the use of “responsible encryption,” exposes internet users to surveillance risks and repercussions for their online speech.
  • “Real name” policies and ID requirements that force users to register their SIM cards with the authorities, provide proof of identity when using an internet cafe, and link their online activities to their “real name” make anonymous speech impossible, creating “chilling effects” that inhibit the expression and even the consumption of controversial online content.
  • Data localization requirements can also create chilling effects. Since the 2013 Snowden revelations, many governments now require that data about their citizens be stored within their borders, ostensibly to protect the data from U.S. intelligence. However, in many cases the real effect of data localization is to make the data easier to access for domestic intelligence and law enforcement.

The fourth information control strategy is online influence campaigns. Governments increasingly seek to control public opinion not by preventing the production and dissemination of information they dislike, but by flooding the public sphere with false, misleading, or distracting information so that disfavored information never captures the public’s attention: this is censorship by “distributed denial of attention.” The spread of these tactics has fueled the current misinformation and disinformation crisis. In response, a wide range of actors, including governments and civil society organizations, have called on companies to adopt and enforce stricter rules against mis- and disinformation on their platforms. As with other types of potentially harmful content, company efforts to restrict influence operations can result in collateral censorship of legitimate expression that is protected under international human rights law.

Limiting companies’ ability to enforce their own content rules is the next frontier in information controls. When companies crack down on hate speech, incitement, and disinformation, they sometimes limit or censor the speech of government actors or political parties. Last month, Twitter removed a tweet from the official account of Nigeria’s president that contained a veiled threat against the Igbo people, the third largest ethnic group in the country. The next day, Twitter was blocked nationwide and officials threatened to arrest anyone using the service via VPN. This has created serious consequences for Twitter, and it has left people in Nigeria, Africa’s most populous country with an estimated 40 million Twitter users, unable to use the service.

In conclusion: digital authoritarians aim to structure the information environment in ways that are beneficial to their own strategic narratives, and detrimental to discourse that challenges them. By addressing the negative effects of foreign censorship on U.S. companies, we will enable those companies to do a better job of upholding their human rights obligations and setting an example for companies around the world.

Thank you again for the opportunity to testify today. I look forward to your questions.
