This post is published as part of an editorial partnership between Global Voices and Ranking Digital Rights.

Raqqa, Syria in August 2017. Videos of the war posted to YouTube by media and rights groups began disappearing after the platform introduced new AI technology targeting terrorist content. Image via Wikimedia Commons by Mahmoud Bali (VOA) [Public domain]

A new video on Orient News’ YouTube channel shows a scene that is all too familiar to its regular viewers. Staff at a surgical hospital in Syria’s Idlib province rush to operate on a man who has just been injured in an explosion. The camera pans downward and shows three bodies on the floor. One lies motionless. The other two are covered with blankets. A man bends over and peers under the blanket, perhaps to see if he knows the victim.

Syrian media outlet Orient News is one of several smaller media outlets that have played a critical role in documenting Syria’s civil war and putting video evidence of violence against civilians into the public eye. Active since 2008, the group is owned and operated by a vocal critic of the Assad regime.

Alongside its own distribution channels, YouTube has been an instrumental vehicle for bringing videos like this one to a wider audience. Or at least it was until August 2017, when, without warning, Orient News’ YouTube channel was suspended.

After some inquiry by the group, alongside other small media outlets including Bellingcat, Middle East Eye and the Syrian Archive — all of whom also saw some of their videos disappear — it came to light that YouTube had taken down hundreds of videos that appeared to include “extremist” content.

But these groups were puzzled. They had been posting their videos, which typically include captions and contextual details, for years. Why were they suddenly seen as unsafe for YouTube’s massive user base?

Because there was a new kind of authority calling the shots.

Just before the mysterious removals, YouTube announced its deployment of artificial intelligence technology to identify and censor “graphic or extremist content,” in order to crack down on ISIS and similar groups that have used social media (including YouTube, Twitter and the now defunct Google Plus) to post gruesome footage of executions and to recruit fighters.

Thousands of videos documenting war crimes and human rights violations were swept up and censored in this AI-powered purge. After the groups questioned YouTube about the move, the company admitted that it had made the “wrong call” on several videos, which were reinstated thereafter. Others remained banned due to “violent and graphic content.”

YouTube’s hasty removal of these videos highlights the problems of using automated tools to flag and remove materials — and why platforms need to be more transparent about their processes for policing content. Even when platforms like YouTube, Facebook, Instagram, and Twitter are clear about what types of content are banned, few provide clear information about what content they remove and why. This makes it difficult for users to understand why content has been removed and how to seek remedy when their rights are violated.

The myth of self-regulation

Companies like Google (parent of YouTube), Facebook and Twitter have legitimate reasons to take special measures when it comes to graphic violence and content associated with violent extremist groups — such material can lead to real-life harm and can be bad for business too. But the question of how they should identify and remove these kinds of content — while preserving essential evidence of war crimes and violence — is far from answered.

The companies have developed their policies over the years to acknowledge that not all violent content is intended to promote or incite violence. While YouTube, like other platforms, does not allow most extremist or violent content, it does allow users to publish such content in “a news, documentary, scientific, or artistic context,” encouraging them to provide contextual information about the video.

But, the policy cautions: “In some cases, content may be so violent or shocking that no amount of context will allow that content to remain on our platforms.” YouTube offers no public information describing how internal mechanisms determine which videos are “so violent or shocking.”

This approach puts the company into a precarious position. It is assessing content intended for public consumption, yet it has no mechanisms for ensuring public transparency or accountability about those assessments. The company is making its own rules and changing them at will, to serve its own best interests.

EU proposal could make AI solutions mandatory

The European Commission is threatening to intervene in this scenario with a draft regulation that would force companies to step up their removal of “terrorist content” or face steep fines. While the proposed regulation would break the cycle of companies attempting and often failing to “self-regulate,” it could make things even worse for groups like Orient News.

Under the proposal, aimed at “preventing the dissemination of terrorist content online,” service providers are required to “take proactive measures to protect their services against the dissemination of terrorist content.” These include the use of automated tools to: “(a) effectively address the re-appearance of content which has previously been removed or to which access has been disabled because it is considered to be terrorist content; (b) detect, identify and expeditiously remove or disable access to terrorist content,” article 6(2) stipulates.

If adopted, the proposal would also require “hosting service providers [to] remove terrorist content or disable access to it within one hour from receipt of the removal order.”

It further grants law enforcement or Europol the power to “send a referral” to hosting service providers for their “voluntary consideration.” The service provider would then assess the referred content “against its own terms and conditions and decide whether to remove that content or to disable access to it.”

The draft regulation demands more aggressive deletion of terrorist content, and quick turnaround times on its removal. But it does not establish a special court or other judicial mechanism that can offer guidance to companies struggling to assess complex online content.

Instead, it would force hosting service providers to use automated tools to prevent the dissemination of “terrorist content” online. This would require companies to use the kind of system that YouTube has already put into place voluntarily.

The EU proposal puts a lot of faith in these tools, ignoring the fact that users, technical experts, and even legislators themselves remain largely in the dark about how these technologies work.

Can AI really assess the human rights value of a video?

Automated tools may be trained to assess whether a video is violent or graphic. But how do they determine the video’s intended purpose? How do they know if the person who posted the video was trying to document the human cost of conflict? Can these technologies really understand the context in which these incidents take place? And to what extent do human moderators play a role in these decisions?

We have almost no answers to these questions.

“We don’t have the most basic assurances of algorithmic accountability or transparency, such as accuracy, explainability, fairness, and auditability. Platforms use machine-learning algorithms that are proprietary and shielded from any review,” wrote WITNESS’ Dia Kayyali in a December 2018 blogpost.

The proposal’s critics argue that forcing all service providers to rely on automated tools in their efforts to crack down on terrorist and extremist content, without transparency and proper oversight, is a threat to freedom of expression and the open web.

The UN special rapporteurs on the promotion and protection of the right to freedom of opinion and expression; the right to privacy; and the promotion and protection of human rights and fundamental freedoms while countering terrorism have also expressed their concerns to the Commission. In a December 2018 memo, they wrote:

“Considering the volume of user content that many hosting service providers are confronted with, even the use of algorithms with a very high accuracy rate potentially results in hundreds of thousands of wrong decisions leading to screening that is over — or under — inclusive.”

In recital 18, the proposal outlines measures that hosting service providers can take to prevent the dissemination of terror-related content, including the use of tools that would “prevent the re-upload of terrorist content.” Commonly known as upload filters, such tools have been a particular concern for European digital rights groups. The issue first arose during the EU’s push for a Copyright Directive, which would have required platforms to verify the ownership of a piece of content when it is uploaded by a user.

“We’re fearful of function creep,” Evelyn Austin from the Netherlands-based digital rights organization Bits of Freedom said at a public conference.

“We see as inevitable a situation in which there is a filter for copyrighted content, a filter for allegedly terrorist content, a filter for possibly sexually explicit content, one for suspected hate speech and so on, creating a digital information ecosystem in which everything we say, even everything we try to say, is monitored.”

Austin pointed out that these mechanisms undercut previous strategies that relied more heavily on the use of due process.

“Upload filtering…will replace notice-and-action mechanisms, which are bound by the rule of law, by a process in which content is taken down based on a company’s terms of service. This will strip users of their rights to freedom of expression and redress…”

The draft EU proposal also imposes stiff financial penalties on companies that fail to comply. For a single company, these can amount to up to 4 percent of its global turnover from the previous business year.

French digital rights group La Quadrature du Net offered a firm critique of the proposal, and noted the limitations it would set for smaller websites and services:

“From a technical, economical and human perspective, only a handful of providers will be able to comply with these rigorous obligations – mostly the Web giants.

“To escape heavy sanctions, the other actors (economic or not) will have no other choice but to close down their hosting services.”

“Through these tools,” they warned, “these monopolistic companies will be in charge of judging what can be said on the Internet, on almost any service.”

Indeed, worse than encouraging “self-regulation,” the EU proposal would take us further away from a world in which due process or other publicly accountable mechanisms are used to decide what we say and see online, and push us closer to relying entirely on proprietary technologies to decide what kinds of content are appropriate for public consumption — with no mechanism for public oversight.


RDR is now seeking feedback on materials that will be used to develop pilot indicators to evaluate internet, mobile, and telecommunications companies on their policies and disclosures related to how targeted advertising affects the human rights of users and their communities.

As we announced last week, RDR is entering an exciting phase as we prepare to expand the RDR Corporate Accountability Index methodology to keep up with the rapidly changing technology sector and its impact on human rights. After the release of our inaugural 2015 RDR Index, we introduced extensive revisions to update the methodology for the second RDR Index in 2017. However, we have only introduced minor revisions to the methodology since the 2017 RDR Index was released. In 2019 and 2020, we will expand and upgrade the RDR Index methodology to include new company types (such as Amazon and Alibaba), and will add new indicators that address some of the pressing issues at the intersection of human rights and technology that have emerged since the current methodology was first developed.

Specifically, RDR will work to determine how and to what extent the RDR Index methodology can be expanded to address malicious exploitation of platforms optimized for targeted advertising, as well as the unaccountable and non-transparent application of algorithms and machine learning. We are starting with a focus on targeted advertising and the company practices that it incentivizes, including some uses of algorithms and machine learning.

Why targeted advertising?

Our goal in developing indicators that address targeted advertising is to set global accountability and transparency standards for how major, publicly traded internet, mobile, and telecommunications companies that profit from targeted advertising can demonstrate respect for human rights online. In the future, RDR’s work in this area can inform the work of other stakeholders: investors conducting due diligence on portfolio risk; policymakers seeking to establish regulatory frameworks to protect the individual and collective rights of internet users; and activists looking to encourage companies to pursue alternative business models and to mitigate the human rights harms associated with targeted advertising.

Progress Update

We held our first stakeholder consultation in January in Brussels, where experts on privacy and data protection helped us refine a set of consultation documents that we are now sharing for feedback. We will be convening a series of stakeholder consultations (in person in various locations and via conference call) over the next several months, where we will solicit input from experts in civil society, companies, and government. If you would like to participate in such a convening, please let us know via email to methodology@rankingdigitalrights.org.

Consultation Documents

Consulting with a wide range of experts and stakeholders—including companies that are likely to be evaluated—is key to developing a methodology that is credible, rigorous, and effective. To that end, we have prepared a set of consultation documents that synthesize RDR’s approach to targeted advertising and human rights:

  1. Rationale for RDR’s methodology expansion to address targeted advertising: an overview of why and how the RDR research team is approaching the indicator development process.

  2. Human Rights Risk Scenarios: a list of “risk scenarios,” each describing human rights harms directly or indirectly related to privacy and expression that can result from targeted advertising business models and the choices they incentivize companies to make.

  3. Best Practices: a number of best practices for company disclosure and policy that could help prevent or mitigate these risks.

Send us your feedback

We welcome written feedback by May 31 on these consultation documents. The feedback will help to inform further in-person stakeholder and expert consultations that will take place between April and June, which in turn will inform the drafting of pilot indicators that will be tested later in 2019. Please send all feedback to methodology@rankingdigitalrights.org. We look forward to hearing from you.


The 2019 Ranking Digital Rights Corporate Accountability Index will be released in May. The exact date and location will be announced next month. Watch our website for launch details, or sign up for our newsletter.

The 2019 RDR Index—the fourth RDR Index since the first was launched in 2015—is made possible by the hard work of our research team and active engagement by many of the companies we rank. Meanwhile, RDR has big plans in the works for the next three years.

We plan to upgrade and expand the Index methodology in 2019 and 2020 to address the rapidly evolving, increasingly complex human rights threats that internet users face. The fifth RDR Index will be published in 2021 with an expanded methodology and scope.

RDR has kept its methodology consistent since 2017 in order to track companies’ progress over time and provide companies with predictability. After we publish the 2019 Index this May, we will expand our indicators to address human rights harms associated with targeted advertising, algorithms, and machine learning. We will adapt the methodology to include more company types, especially powerful global platforms with core e-commerce businesses such as Amazon and Alibaba. We will also review the current methodology and research process and consider other potential changes in light of how technology and the companies we rank are evolving. The new methodology will be finalized by mid-2020 so that research can begin for the fifth Index, to be released in 2021.

Public consultation

We have started preliminary research and stakeholder consultations needed to draft indicators addressing human rights harms associated with targeted advertising.  This month we will publish an update about our progress, release our first set of consultation documents, and invite feedback from all interested parties.

As our timeline for the rest of our methodology expansion and revision work progresses, we will continue to post updates and invite participation in the consultation process. The best way to keep up with our progress and plans is to subscribe to our newsletter here.

Organizational growth

In the second half of 2018, we conducted an impact assessment and undertook a strategic planning process. That process enabled us to sharpen the way we articulate our mission, vision, and theory of change, as well as how we describe our impact.

Our strategic assessment and planning process also enabled us to make some other key decisions about RDR’s priorities for the next three years. In addition to upgrading, strengthening and expanding the Index, we will focus on three other strategic priorities: increasing our impact, visibility, and engagement; strengthening organizational structure and capacity; and diversifying funding and substantially increasing our budget. For more information, please see our new strategic priorities page.

Over the past five years we have proven the value of the RDR Index. Now the time has come to scale up for long-term impact and sustainability. As we enter this new phase, we look forward to working with companies, researchers, civil society advocates, investors, policymakers, and all other stakeholders who share our vision of a global internet that supports and sustains human rights. If you are interested in working with us to take RDR and the Index methodology to the next level, contact us. We look forward to hearing from you.

A group of investors has endorsed the Ranking Digital Rights (RDR) Corporate Accountability Index as an important tool for helping tech companies meet their human rights responsibilities and for helping investors identify digital rights risks.

In December, 49 members of the Investor Alliance for Human Rights (IAHR), a coalition of global funds focused on advancing corporate human rights due diligence, issued a statement to the 22 internet, mobile and telecommunications companies evaluated in the Ranking Digital Rights (RDR) Corporate Accountability Index urging these companies to use the RDR Index to improve governance systems and performance on salient human rights risks related to privacy and freedom of expression.

The group also highlighted growing financial and reputational risks in the ICT sector due to the mishandling of user data and real and potential human rights abuses. In the statement, the investors say they rely on the RDR Index to assist in investment decision-making and to inform corporate engagements with the ICT sector. The investors argue that, as custodians of their users’ data and digital rights, these companies have a responsibility to respect users’ rights to privacy and freedom of expression and must be accountable for how they handle users’ data.

RDR is proud to be recognized by a growing number of investors who hold shares in the world’s most powerful internet, mobile, and telecommunications companies as an indispensable tool for conducting due diligence on potential risks in their portfolios and engaging with those companies about how they can improve respect for users’ rights. As Rosa van den Beemt of NEI Investments stated in today’s IAHR press release: “As investors we are committed to using the transparent and comparable data the RDR Index provides to hold companies accountable.”

RDR maintains an investor resource page which includes useful materials such as links to an October 2018 investor webinar hosted by IAHR, and our 2018 Investor Update analyzing the relationship between the 2018 RDR Index findings on company policies and disclosures and some of the key developments reflected in last year’s negative headlines about several of the world’s most powerful internet companies.

Most notably, investors turned to RDR data and analysis in the wake of the revelation that, unbeknownst to its users, Facebook data had been shared with the political consulting firm Cambridge Analytica and used in efforts to influence the 2016 U.S. presidential election. In May 2018 Domini Funds cited RDR in announcing its decision to sell all Facebook holdings. RDR research was also cited in an open letter from 78 organizations to major Facebook shareholders, and in a shareholder resolution by Arjuna Capital.

We look forward to engaging further with the investment community on the findings of the 2019 RDR Index, to be released in May.

This post is published as part of an editorial partnership between Global Voices and Ranking Digital Rights. Global Voices’ advocacy director Ellery Roberts Biddle co-authored this piece.

A Google music event in China, 2009. Photo by Keso via Flickr (CC BY 2.0)

The secret is out — Google is building a search engine for China.

After deflecting questions from reporters for months, CEO Sundar Pichai acknowledged in October that Google plans to build a mobile app that will serve Chinese users — and thus comply with Chinese government censorship mandates.

But big questions remain. Namely, how will this actually work? To keep the censors happy, Google will need to invest significant human, financial and technical resources to keep up with China’s unique and exhaustive approach to controlling online information and speech. While the company may be prepared to make some concessions (and substantial investments) in order to enter the Chinese market, this move will force Google to undermine its own commitments “to advancing privacy and freedom of expression for [its] users around the world.”

Google’s Dragonfly program also raises questions about what responsibilities companies — and in particular tech companies — have to protect and respect the human rights of their users, not just in the company’s home market but in every market in which they operate. Rights groups and experts agree that these companies should conduct human rights impact assessments before entering new markets or launching new products, in order to identify how aspects of their business may affect freedom of expression and privacy and to mitigate any risks posed by those impacts.

How to censor the internet (by Chinese standards)

In contrast to US-based companies, which are largely shielded from liability for illegal content, Chinese internet giants are obligated to proactively censor illegal and politically sensitive content and report it to the authorities. If Google does enter the Chinese market, it can expect to be held to the same standard.

What counts as illegal content? This is dictated by the country’s far-reaching Cybersecurity Law, along with an ever-evolving set of demands from high-level party and government officials in the Cyberspace Administration.

China’s Cybersecurity Law bans internet users from publishing information that damages “national honor”, “disturbs economic or social order” or is aimed at “overthrowing the socialist system”. The law also requires internet companies to collect and verify users’ identities whenever they use major websites or services.

Censorship of politically-sensitive keywords is a powerful component of this system. Alongside terms that have long been outlawed, such as “human rights” and “Tiananmen Square”, there is a constant churn of new censorship requests from above, driven by current events and hot topics on social media. Earlier this year, for example, censors moved to ban phrases like “anti-sexual harassment” in the wake of the #metoo movement spreading to China.

In order to comply and keep up with state demands, large tech firms in China invest substantial financial and human resources into the work of keeping their sites “clean” and legal. Companies enlist layers of individuals as part of this effort, ranging from full-time employees to community “advisors” to “civilization volunteers” who promote positive messages about the Communist Party (and drown out negative ones). An unofficial estimate by a Japanese media outlet in 2014 put the number of people employed in the internet censorship sector at eight million.

Artificial intelligence is also becoming a bigger part of this industry, though there is still relatively little known about how companies are building censorship decision-making mechanisms into their systems.

For foreign companies like Google, there also are extra hurdles when it comes to data storage. As Google will be collecting user data (The Intercept reports that users in China will need to log in before they can search), the company will need to run a data center with a local partner, as per China’s Cybersecurity Law. Drawing on a leaked internal memo, The Intercept contends that the Chinese partner company would have “unilateral access” to users’ search data.

To keep up with these demands, Google may need to substantially change its model for content distribution and moderation, not to mention data collection. This will surely put Google’s principles of openness and preserving free speech to the test.

Profit before human rights: ‘a race to the bottom’

In its early years, Google did comply with censorship requests from the Chinese government. But it stopped censoring search results in China in 2010, after suffering a major cyberattack originating from within the country that targeted Chinese human rights activists. After the attack, the company began directing traffic from mainland China to its Hong Kong version, which remained relatively open, similar to the rest of the world. Within months, the company’s services were fully blocked in mainland China.

Google’s decision was applauded by internet freedom activists, both in and outside of China, and placed Google into a unique category. It became a company that chose to change its agenda (and likely lose profits) in order to protect human rights.

Google was forced to leave China in 2010. Image by Flickr user Josh Chin (CC BY-NC 2.0)

Isaac Mao, a Hong Kong-based entrepreneur and founder of the Musicoin Project, looked back on the 2010 move: “Google’s action then enlightened a lot of people to pay attention to censorship issues, that [was] historical.”

Although Google officially removed its servers from China in 2010, it has maintained a presence in the Chinese market by investing in local startups and opening an artificial intelligence research center in Beijing. But the decision to bring its flagship products back to China, on the Chinese government’s terms, represents a true paradigm shift for the industry as a whole.

Mao sees Google’s plan to re-enter China as being entirely profit-driven.

“Chinese internet users suffer a lot and they really want to see Google hold a high level of morality…instead of just caring about the single digits of the market share,” he told us.

Alongside the changes that this will bring for Google, experts say this shift will encourage other tech companies (whose services are currently blocked in China) to seek their own share of the Chinese market.

“It would embolden other companies to also lower their human rights standards for the Chinese market. And then it becomes a race to the bottom,” said Yaqiu Wang, the China researcher at Human Rights Watch.

Lokman Tsui, a professor at the Chinese University of Hong Kong who once worked for Google, told us that the move will also make it easier for governments around the world to impose stricter censorship regimes on Google. He said:

“In negotiations with governments around the world, any government can now say, ‘you can do that kind of censorship for China, but you cannot do this for us?’”

Joining the party in Beijing

Google’s decision to bring its flagship products back to the Chinese market is unsurprising and seems to correspond to a broader trend among major US companies. Facebook, LinkedIn and Apple, to name a few, have all sought to establish stronger footing in China in recent years — though only some have succeeded.

In 2014, LinkedIn launched a Chinese version of its service which prevents mainland users from accessing content forbidden by the Chinese government. Speaking with The Guardian after users complained about political content being blacked out on the site, LinkedIn Asia-Pacific staff member Roger Pua explained that this was intended “to protect the privacy and security of the member who posted that content.”

Apple products have long been available for purchase in China. Throughout the years, the tech giant has also been complicit in censoring its Chinese users by cracking down on VPNs, removing the New York Times app from its China store, and censoring the Taiwan flag emoji. In early 2018, Apple agreed to store user data locally to comply with the country’s 2017 cybersecurity law, in a move that was slammed by human rights groups and privacy advocates.

While both Apple and LinkedIn have encountered challenges along the way, it appears that for both companies, the strategic business decision to move into China has so far been effective. That said, neither of these companies has nearly as much power over what people say and see online as Google.

What next?

It is clear that Google’s executives and leadership are prioritising profit over openness in this move. What remains unclear is how far the company is willing to go to get the Chinese government’s blessing.

Google will face fierce competition from Chinese tech companies, in particular Baidu, which dominates the country’s search engine market. And while many users in China may be willing to switch to Google due to public disappointment with Baidu, Chinese companies have an edge: their close ties to the Chinese government.

Most Chinese companies “have very deep local or central government relationships, and in China the relationship is everything,” Mao said. “If they are not satisfied they can shut you down overnight.”

Chinese companies “have better relations with the Chinese government,” Tsui said. “This is something Google never will be better at, nor should they want to be better at that.”

In April this year, authorities ordered Toutiao, or Today’s Headlines, China’s most popular information platform, to shut down its affiliated social media application, Neihan Shequ, which allowed users to submit jokes and riddles for others to comment on. Neihan Shequ was banned on the grounds that it was “heading in a wrong direction with its vulgar and banal content.” The ban came despite a public apology from the company’s CEO Zhang Yiming, who also promised that Toutiao would strengthen self-censorship measures by increasing its pre-screening staff from 6,000 to 10,000 people.

Many of the largest tech companies in the country, including Baidu, even have a designated Communist Party branch within their office. In 2017, the government made a big push for companies to do this, offering them cash and other incentives in exchange for even more access to corporate activities and control.

Earlier this month, the micro-blogging platform Weibo gave 1,322 accounts affiliated with government entities, including public security bureaus and cyberspace offices, the direct authority to label posts as “rumors”. Weibo will not even play a role in the screening process.

It is difficult to imagine that Google would ever give Chinese authorities the ability to label its content, or that the Silicon Valley superpower would agree to establish a Chinese Communist Party branch within its own Beijing office. But the company will surely be asked to make substantial concessions to the government. So where will Google draw the line? And if and when it does, will the company be asked to leave?

These and many other unknowns leave many wondering if Google is really taking “a longer-term view” here, as its CEO maintains.