Apple is in the hot seat this week, but the reality is that none of the companies that control the flow of your personal data, your access to information, or your ability to publish and communicate through your smartphone are doing enough to respect your privacy or freedom of expression.

Last January, a prominent billboard near the Consumer Electronics Show declared: “What happens on your iPhone, stays on your iPhone.” As support for privacy rights and data protection grows around the world, Apple has been positioning itself as the privacy-respecting alternative to companies like Google whose business models rely on the collection and commodification of user information at a massive scale, proclaiming its belief that “Privacy is a fundamental human right.” But does the reality live up to the hype?

Two new articles published this week suggest that Apple has work to do if its privacy practices are to live up to its claims. In a recent Washington Post piece, journalist Geoffrey Fowler examined the many ways that apps track users’ iPhone activity. His week-long experiment revealed that 5,400 trackers “guzzled” his data, and that many apps’ actual data collection practices diverged from what their privacy policies and other policy documents stated. Separately, The Verge reported that three iTunes users were suing Apple for allegedly making data about individual users’ listening habits available to data brokers and advertisers.

Apple’s privacy issues are by no means unique among smartphone companies. Rather, Apple’s claims about its robust protection of privacy are what set it apart from its competitors, and journalists should continue to point out the gaps between the company’s claims and reality. But as findings from the 2019 Ranking Digital Rights Corporate Accountability Index show, while Apple ranks relatively well on transparency about policies and practices affecting user privacy, it has persistently fared even worse with respect to another fundamental human right: freedom of expression.

While most of the 24 companies evaluated in this year’s Index demonstrated a weaker commitment to respect users’ freedom of expression than users’ privacy, Apple displayed the widest gap by far, as the graphic below illustrates. It was the only company in the entire Index to receive full credit for its commitment to privacy as a human right and no credit for making a similar commitment to freedom of expression.

Gaps in governance and oversight over users’ freedom of expression, 2019 RDR Index. Most companies displayed a weaker commitment to respect users’ freedom of expression than to users’ privacy, disclosing less oversight, due diligence, or other processes to identify and mitigate threats to users’ freedom of expression. For more information, see the 2019 RDR Index report.

Apple’s transparency about policies and practices affecting freedom of expression ranked lower in the 2019 RDR Index than that of any other U.S.-based internet or mobile company, as we point out in Apple’s 2019 RDR Index report card published on May 16. On May 29, in advance of its Worldwide Developers Conference (WWDC), Apple unveiled a new section of its website featuring information about its App Store policies and practices. Yet while the new section makes such disclosures more prominent, Apple still reveals only limited information about how it enforces its App Store rules or how it determines whether an app is breaking them. Even now, although the company is widely reported to remove apps in response to government demands around the world—including in China—its website contains no information about how Apple handles government requests to remove content from the App Store, much less data about the kinds of apps that are censored in particular countries. (Apple’s Transparency Reporting page states that, starting with its report for July 1 – December 31, 2018, the company will begin reporting on government requests to take down apps from the App Store for alleged legal or policy violations, but it has yet to publish any such information.)

While iPhone users have reason to demand greater transparency and accountability from Apple, users of Android devices—whether they are using handsets sold directly by Google or phones from other device manufacturers like Samsung that also run on Android—also face threats to privacy and freedom of expression that the companies fail to mitigate or disclose to users. Since 2017 the Ranking Digital Rights Corporate Accountability Index has been evaluating the mobile ecosystems controlled by Apple, Google, and Samsung. While Apple significantly improved its disclosures between 2017 and 2018, our data shows much less progress in the past year. Google made few improvements to its disclosures about Android. As for Samsung, it disclosed significantly less than either Apple or Google, and its overall score declined since 2018. (See Google’s 2019 RDR Index report card and Samsung’s report card.)

The growing reach of smartphones

Smartphones and apps are front and center in the fight for privacy and freedom of expression across the global internet: over half of the world’s 4.3 billion internet users access the internet primarily through apps on their mobile phones rather than through browsers on a desktop or laptop computer.

The relative affordability of mobile phones has contributed to their growing global popularity as a primary means of using the internet. As a result, any risks to mobile users’ rights to freedom of expression, access to information, and privacy are compounded for low-income and other vulnerable internet users who are more likely to use older, less expensive devices. These older devices are inherently more vulnerable to malware, targeted hacking, non-consensual data collection, and other harms than newer and more expensive models.

A “mobile ecosystem” is an indivisible set of goods and services offered by a mobile device company, comprising the device hardware, operating system, app store, and user account. Alarmingly, and despite improved transparency in other areas, the 2019 RDR Corporate Accountability Index found that neither Apple nor Google—whose operating systems together account for 98% of the world’s smartphones—had taken meaningful steps to improve their disclosure of how their mobile products affect users’ human rights since the previous year.

In addition to Apple’s iOS and Google’s Android ecosystems, we evaluated device manufacturer Samsung and 12 global telecommunications companies, whose modifications to the stock Android operating system can also have significant effects on device security. Across the board, companies failed to disclose key information that users have a right to know, and the two main players showed opposite strengths and weaknesses: Apple scored higher than Google on privacy but much lower on freedom of expression, while Google disclosed more about policies affecting users’ freedom of expression but less about how the Android ecosystem handles user privacy.

Mobile ecosystem scores, 2019 RDR Index. For full data, see here.

App stores and freedom of expression

App stores have become gatekeepers with tremendous power to control what types of apps are available, to whom, under what conditions, and what kinds of user data they can collect. This is especially true of the Apple mobile ecosystem, as users can only install apps through Apple’s proprietary App Store (unless they modify their device in ways that are disallowed by Apple, such as jailbreaking it). In contrast, Android users can download apps from third-party app stores rather than exclusively from the Google Play Store, as well as “side-load” apps without going through an app store.

Very little is known about censorship within the various app stores. Like other platforms that host content produced by third parties, app stores receive requests from governments and from private parties to remove or restrict content. News apps, VPNs (which help users get around China’s technical censorship system), the Taiwanese flag emoji, and even individual songs have all disappeared from Apple’s platforms in the PRC, with no explanation from the company.

Google’s Android was the only mobile ecosystem in the 2019 RDR Index to publish any data about the volume and nature of content and accounts restricted for violating the Play Store’s rules (see the findings for 2019 RDR Index indicator F4.1), although this data was not comprehensive or published regularly. Apple failed to provide enough information to users about its process for evaluating requests for content restriction (see indicator F5), its process for enforcing its own terms of service, or the volume and nature of apps that it removed or restricted for violating its rules (see F4.1). Samsung, which operates its own Galaxy Store, did not disclose such information, either.

Data collection and privacy

Privacy of location data is especially important for mobile ecosystems because people tend to keep their devices on them at all times. Historical data about where the device has been reveals extremely sensitive and personal information. The Android ecosystem in particular needs to limit the collection of location data by Google and by third-party apps.

Google received only partial credit on the 2019 RDR Index indicator P7.5, which evaluates whether the company clearly discloses that it provides users with options to control the device’s geolocation functions. The company had previously received credit for such disclosure but, in August 2018, the Associated Press found that Google saves users’ location history even if they have disabled “Location History” on mobile devices. Google has since revised its page on managing location data, stating that some services may still save users’ data even if location data is turned off. For journalists and activists to safely conduct their work, they must have the ability to control who can track their whereabouts and for what purposes. Similarly, people have the right to know if key location data, such as visits to hospitals, are shared with insurance companies. Such data sharing practices have a strong potential to affect insurance rates and access to healthcare in ways that are inherently discriminatory.

While Apple disclosed that it requires apps made available through its App Store to have a privacy policy (see indicator P1.4), it did not disclose whether it evaluates the substance of individual apps’ privacy policies to ensure that they provide users with adequate information about their privacy rights, such as what user information the apps collect and share. This raises the question: how meaningful are policies governing third-party developers if Apple doesn’t enforce them? If Apple is to live up to its promise that “What happens on your iPhone, stays on your iPhone,” it must substantively evaluate the content of apps’ privacy policies and verify that each app adheres to its own policy, notably regarding collection of user data (see P3).

Security risks unique to mobile devices

Low-income users of Android devices are especially vulnerable: manufacturers like Samsung often modify the stock Android operating system in ways that affect how quickly users receive security updates. As we highlighted in the 2017 RDR Index, such changes to the Android mobile operating system can hinder the timely delivery of software updates, including security updates, that are key to device security and user privacy. Samsung no longer disclosed what changes it introduced to the Android mobile operating system (P14), though it had previously disclosed some information about such modifications.

Telecommunications providers can also make such changes affecting how quickly users can access security updates (P14.6). None of the telecommunications companies evaluated in the 2019 RDR Index disclosed such information. Manufacturers and telecommunications companies all need to be much more transparent about the changes they make to the Android operating system and how the changes affect users’ device security.

Android models from the Nexus and Pixel product lines and iOS devices receive updates directly from Google and Apple, respectively, but neither company gives users all the information they need about device security. Google was the only company to disclose how long various device models would be guaranteed to receive software updates—a “best by” date for smartphones—though it did not commit to providing security updates for five years after a new model’s release (a reasonable expectation, given how expensive devices can be). Apple and Samsung did not provide such information, making it difficult for users to judge how long their devices will be safe to use.

Demanding more of companies

Any device designed to curate content, facilitate speech, collect data, and allow multiple third parties to collect reams of personal information—including physical location around the clock—poses a significant threat to human rights. Users should be concerned that these companies have made so little progress when it comes to respecting freedom of expression and privacy on mobile devices: none of them scored more than 60% on RDR’s indicators measuring mobile ecosystems’ transparency.

The 2019 RDR Index includes a series of policy recommendations that mobile ecosystem companies can and should adopt to ensure their users’ safety and rights online, including:

Apple

  • Be transparent about restrictions to freedom of expression: Apple should make its terms of service easier to find and understand, and it should publish data about actions it takes to enforce its own rules, as well as about actions it takes to remove content as a result of government and other third-party demands (as it states it will start doing for the July 1 – December 31, 2018 period).
     
  • Enforce rules protecting user privacy: Apple should enforce rules governing third-party apps’ collection of user information, and publish data about its actions.
     
  • Guarantee security updates for five years: Apple should ensure its devices are safe to use for at least five years after release, and publish this “best by” date.
     

Google

  • Be transparent about enforcing the company’s own rules: Google should provide comprehensive data about restrictions to the Play Store due to its own terms of service enforcement. It should publish this information at least once a year, as a structured data file.
     
  • Do more to protect privacy: Google should clarify what information it collects and shares, and for what purpose—and give Android users clear options to control what data is collected about them (notably location data).
     
  • Guarantee security updates for five years: Google should increase the duration for which it guarantees new devices will receive security updates from three to five years.
     

Samsung

  • Be transparent about third-party requests: Samsung should publish data about third-party requests for content and account restrictions, and for user data.
     
  • Improve security disclosures: Samsung should be more transparent about measures it takes to keep user information secure, and if it encrypts user communication and private content.
     
  • Commit to providing timely security updates: Samsung should disclose what modifications it makes to the Android operating system, if any, and how such changes affect the company’s ability to send security updates to users. It should commit to providing security updates for the operating system and other critical software for a minimum of five years after release, and to doing so within one month of a vulnerability being announced to the public.
     

Telecommunications companies

  • Commit to providing timely security updates: Telecommunications companies should disclose what modifications they make to the Android operating system, if any, and how such changes affect users’ access to security updates. In all cases, users should be able to install security updates within one month of a vulnerability being announced to the public.
     

Click here to read the full 2019 RDR Index report.

This week, Ranking Digital Rights has submitted a set of recommendations in response to Facebook’s call for feedback on its Draft Charter for its recently proposed Oversight Board—an independent body to which people can appeal Facebook’s content moderation decisions.

Facebook has come under intensifying fire for the range of ways that its platform has been used to incite violence and spread disinformation campaigns—as well as for the lack of transparency around how it develops and enforces its Community Standards. These standards determine what types of content the company deletes from its platform, and they have a powerful impact on which viewpoints are silenced and which, in effect, are amplified.

The direct link between Facebook’s content moderation policies and its users’ right to freedom of expression is a longstanding concern of internet activists and watchdog groups, including Ranking Digital Rights. In April 2018, in response to mounting pressure, Facebook published its internal guidelines for how it enforces its Community Standards. It also launched a new appeals process for users whose content may have been wrongfully removed.

These are both laudable improvements, but a far cry from what is needed to ensure adequate freedom of expression protections for Facebook’s more than 2 billion users worldwide. The 2019 RDR Corporate Accountability Index, published on May 16, revealed that Facebook’s grievance and remedy mechanisms—including its appeals process for content removals—were among the weakest of any company in the RDR Index, even after the improvements the company made to its appeals process over the last year.

The Draft Charter to which we have responded outlines Facebook’s proposal for the creation of the Oversight Board, with questions and considerations on the Board’s membership and role in the appeals process. We commend Facebook for publicly disclosing and seeking input on the Draft Charter for its Oversight Board, and welcome the opportunity to help inform and improve its content moderation policies and appeals processes.

The 2019 RDR Index findings offer a roadmap for how Facebook can and should improve its practices. Our recommendations, submitted to company representatives this week, highlight the need for Facebook to clarify the Oversight Board’s role in implementing the company’s commitment to respect human rights. We believe that clearly grounding the Oversight Board’s mandate in international human rights standards is essential, given Facebook’s struggle with how to make decisions affecting users’ freedom of expression and with how Facebook users’ speech affects the rights of others on the platform.

We also stress the need for the Oversight Board to contribute to the company’s human rights impact assessment process, which should include assessments of how the content and enforcement of the company’s Community Standards affect the human rights of users and communities around the world. The Oversight Board should also be empowered to make recommendations regarding the company’s Community Standards and processes for enforcement. In addition, we urge the Board to regularly publish data about the nature and volume of its decisions.

Recommendations submitted by other concerned stakeholders emphasize the high stakes involved in Facebook’s content moderation decisions and its ability to impact users’ rights around the world. David Kaye, the UN Special Rapporteur on freedom of opinion and expression, submitted a letter to Mark Zuckerberg urging Facebook to include human rights principles in the Board’s review standards, noting that company standards based on “vague assertions of community interests” have “created unstable, unpredictable and unsafe environments for users and intensified government scrutiny”—the very problems that the creation of the Board seeks to address. We wholeheartedly agree with these recommendations. Ranking Digital Rights is also a signatory to a joint statement with recommendations from civil society, investors, and academics.

As findings from the 2019 RDR Index show, most companies are not transparent enough about who has the power to control what they can say or see online, even as government pressure on companies to control online speech increases globally. Facebook’s proposed Oversight Board is an opportunity for increased transparency and accountability over the company’s own actions to police content on its platform. We look forward to the company’s responses to the feedback it has received during the public consultation process.


Microsoft has unseated Google at the top of the 2019 RDR Corporate Accountability Index. Telefónica outpaced Vodafone among telecommunications companies. Yet despite progress, most companies still leave users in the dark about key policies and practices affecting privacy and freedom of expression, according to the 2019 Ranking Digital Rights Corporate Accountability Index, released today.

The 2019 RDR Index evaluated 24 of the world’s most powerful internet, mobile ecosystem, and telecommunications companies on their disclosed commitments, policies, and practices affecting users’ freedom of expression and privacy, including governance and oversight mechanisms. Research showed that in the past year a majority of companies improved and clarified policies affecting users’ privacy, a trend that appears to be driven by new data protection regulations in the EU and elsewhere. But even the leading companies fell short in key areas. Few scored higher than 50 percent, failing to meet even basic transparency standards and leaving users across the globe in the dark about how their personal information is collected, protected, and even profited from.

Companies evaluated by the 2019 RDR Index collectively provide products and services used by more than half of the world’s 4.3 billion internet users, thus providing a snapshot of the extent to which users’ rights are protected and respected across the globe. The RDR Index methodology sets minimum standards for what companies should disclose about their rules and processes for enforcing them, data privacy and security policies and practices, and how they handle government demands to remove or block content, to shut down internet services, or to access user information and communications.

Company highlights

  • Microsoft ranked first, due to strong governance and consistent application of its policies across all services. It unseated Google, whose lead had been shrinking since the first RDR Index in 2015.

  • Telefónica shot ahead of all other telecommunications companies. Vodafone, which led in 2018, earned second place, ahead of AT&T, which dropped to third.

  • Facebook maintained fourth place among internet and mobile ecosystem companies, but received a score of just 57% and lagged behind RDR Index leaders in key areas. It showed no evidence of risk assessments on its use of AI or terms of service enforcement, and despite some improvements still disclosed less than a number of its peers about many aspects of how it handles user information.

Click here to view report cards for all 24 companies evaluated by the 2019 RDR Index. An in-depth report analyzing the 2019 RDR Index results across companies and issue areas elaborates on how the world’s most powerful tech companies have a long way to go before the internet supports and sustains human rights for everyone.

“People have a right to know, and companies have a responsibility to show,” said Ranking Digital Rights Director Rebecca MacKinnon. “When companies fail to meet RDR’s standards for disclosure of commitments, policies, and practices, users are exposed to undisclosed risks affecting their freedom of expression and privacy.”

For the full interactive data and analysis, report cards for all 24 companies, methodology, raw data, and other resources for download, please visit: rankingdigitalrights.org/index2019. Follow the conversation on Twitter using the hashtag #rankingrights.

Follow our 2019 RDR Index launch events online and in person:

On May 16th the full results of the 2019 Ranking Digital Rights Corporate Accountability Index will be released online on the RDR website, with key findings to be presented at the Stockholm Internet Forum.

Find out which companies have improved since the 2018 RDR Index—and how. Then join the global conversation about what companies and governments need to do in order to improve the protection of internet users’ human rights around the world.

More details about the timing of our Stockholm launch, and how to follow it online, will be posted on the SIF website and our events page in the coming weeks.

We are also pleased to announce several other launch events in May and June:

May 21, Washington DC: U.S. launch of the 2019 RDR Corporate Accountability Index at New America (9:30am Eastern time).

May 23, Palo Alto, CA: West Coast launch of the 2019 RDR Corporate Accountability Index at the Stanford Global Digital Policy Incubator (1:30pm Pacific time). 

June 11-14, Tunis, Tunisia: 2019 RDR Index session at RightsCon, exact date and time to be announced.

Our events page will be updated with more details about these and other events as they become available.

Subscribe to our newsletter to keep up with our plans and make sure you get the results of the 2019 RDR Index as soon as they are published!

This post is published as part of an editorial partnership between Global Voices and Ranking Digital Rights.

Raqqa, Syria in August 2017. Videos of the war posted to YouTube by media and rights groups began disappearing after the platform introduced new AI targeting terrorist content. Image via Wikimedia Commons by Mahmoud Bali (VOA) [Public domain]

A new video on Orient News’ YouTube channel shows a scene that is all too familiar to its regular viewers. Staff at a surgical hospital in Syria’s Idlib province rush to operate on a man who has just been injured in an explosion. The camera pans downward and shows three bodies on the floor. One lies motionless. The other two are covered with blankets. A man bends over and peers under the blanket, perhaps to see if he knows the victim.

Syrian media outlet Orient News is one of several smaller media outlets that have played a critical role in documenting Syria’s civil war and putting video evidence of violence against civilians into the public eye. Active since 2008, the group is owned and operated by a vocal critic of the Assad regime.

Alongside the outlet’s own distribution channels, YouTube has been an instrumental vehicle for bringing videos like this one to a wider audience. Or at least it was, until August 2017 when, without warning, Orient News’ YouTube channel was suspended.

After some inquiry by the group, alongside other small media outlets including Bellingcat, Middle East Eye and the Syrian Archive — all of which also saw some of their videos disappear — it came to light that YouTube had taken down hundreds of videos that appeared to include “extremist” content.

But these groups were puzzled. They had been posting their videos, which typically include captions and contextual details, for years. Why were they suddenly seen as unsafe for YouTube’s massive user base?

Because there was a new kind of authority calling the shots.

Just before the mysterious removals, YouTube announced its deployment of artificial intelligence technology to identify and censor “graphic or extremist content,” in order to crack down on ISIS and similar groups that have used social media (including YouTube, Twitter and the now defunct Google Plus) to post gruesome footage of executions and to recruit fighters.

Thousands of videos documenting war crimes and human rights violations were swept up and censored in this AI-powered purge. After the groups questioned YouTube about the move, the company admitted that it had made the “wrong call” on several videos, which were reinstated thereafter. Others remained banned, due to “violent and graphic content.”

YouTube’s hasty removal of these videos highlights the problems of using automated tools to flag and remove materials — and why platforms need to be more transparent about their processes for policing content. Even when platforms like YouTube, Facebook, Instagram, and Twitter are clear about what types of content are banned, few provide clear information about what content they remove and why. This makes it difficult for users to understand why content has been removed and how to seek remedy when their rights are violated.

The myth of self-regulation

Companies like Google (parent of YouTube), Facebook and Twitter have legitimate reasons to take special measures when it comes to graphic violence and content associated with violent extremist groups — such content can lead to real-life harm, and it can be bad for business too. But the question of how they should identify and remove these kinds of content — while preserving essential evidence of war crimes and violence — is far from answered.

The companies have developed their policies over the years to acknowledge that not all violent content is intended to promote or incite violence. While YouTube, like other platforms, does not allow most extremist or violent content, it does allow users to publish such content in “a news, documentary, scientific, or artistic context,” encouraging them to provide contextual information about the video.

But, the policy cautions: “In some cases, content may be so violent or shocking that no amount of context will allow that content to remain on our platforms.” YouTube offers no public information describing how internal mechanisms determine which videos are “so violent or shocking.”

This approach puts the company into a precarious position. It is assessing content intended for public consumption, yet it has no mechanisms for ensuring public transparency or accountability about those assessments. The company is making its own rules and changing them at will, to serve its own best interests.

EU proposal could make AI solutions mandatory

A committee in the European Commission is threatening to intervene in this scenario, with a draft regulation that would force companies to step up their removal of “terrorist content” or face steep fines. While the proposed regulation would break the cycle of companies attempting and often failing to “self-regulate,” it could make things even worse for groups like Orient News.

Under the proposal, aimed at “preventing the dissemination of terrorist content online,” service providers are required to “take proactive measures to protect their services against the dissemination of terrorist content.” These include the use of automated tools to: “(a) effectively address the re-appearance of content which has previously been removed or to which access has been disabled because it is considered to be terrorist content; (b) detect, identify and expeditiously remove or disable access to terrorist content,” article 6(2) stipulates.

If adopted, the proposal would also require “hosting service providers [to] remove terrorist content or disable access to it within one hour from receipt of the removal order.”

It further grants law enforcement or Europol the power to “send a referral” to hosting service providers for their “voluntary consideration.” The service provider will assess the referred content “against its own terms and conditions and decide whether to remove that content or to disable access to it.”

The draft regulation demands more aggressive deletion of terrorist content, and quick turnaround times on its removal. But it does not establish a special court or other judicial mechanism that can offer guidance to companies struggling to assess complex online content.

Instead, it would force hosting service providers to use automated tools to prevent the dissemination of “terrorist content” online. This would require companies to use the kind of system that YouTube has already put into place voluntarily.

The EU proposal puts a lot of faith in these tools, ignoring the fact that users, technical experts, and even legislators themselves remain largely in the dark about how these technologies work.

Can AI really assess the human rights value of a video?

Automated tools may be trained to assess whether a video is violent or graphic. But how do they determine the video’s intended purpose? How do they know if the person who posted the video was trying to document the human cost of conflict? Can these technologies really understand the context in which these incidents take place? And to what extent do human moderators play a role in these decisions?

We have almost no answers to these questions.

“We don’t have the most basic assurances of algorithmic accountability or transparency, such as accuracy, explainability, fairness, and auditability. Platforms use machine-learning algorithms that are proprietary and shielded from any review,” wrote WITNESS’ Dia Kayyali in a December 2018 blogpost.

The proposal’s critics argue that forcing all service providers to rely on automated tools in their efforts to crack down on terrorist and extremist content, without transparency and proper oversight, is a threat to freedom of expression and the open web.

The UN special rapporteurs on the promotion and protection of the right to freedom of opinion and expression; the right to privacy; and the promotion and protection of human rights and fundamental freedoms while countering terrorism have also expressed their concerns to the Commission. In a December 2018 memo, they wrote:

‘’Considering the volume of user content that many hosting service providers are confronted with, even the use of algorithms with a very high accuracy rate potentially results in hundreds of thousands of wrong decisions leading to screening that is over- or under-inclusive.’’

In recital 18, the proposal outlines measures that hosting service providers can take to prevent the dissemination of terror-related content, including the use of tools that would “prevent the re-upload of terrorist content.” Commonly known as upload filters, such tools have been a particular concern for European digital rights groups. The issue first arose during the EU’s push for a Copyright Directive, which would have required platforms to verify the ownership of a piece of content when it is uploaded by a user.
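To make the mechanism concrete, here is a minimal sketch, in Python with entirely hypothetical names, of how a re-upload filter of this kind works in principle: content that has already been removed is fingerprinted, and any new upload matching a stored fingerprint is rejected automatically. Real deployments rely on perceptual or proprietary hashing, often drawing on shared industry hash databases, rather than a simple checksum, but the structural point is the same.

```python
import hashlib

# Hypothetical illustration of a hash-based "upload filter"; not any
# platform's actual implementation.

removed_fingerprints: set[str] = set()  # fingerprints of previously removed content


def fingerprint(data: bytes) -> str:
    # Reduce a file to a fixed fingerprint. Real systems use perceptual hashes
    # that also match re-encoded or lightly edited copies of the same video.
    return hashlib.sha256(data).hexdigest()


def register_removal(data: bytes) -> None:
    # Record a removed file so future uploads of identical content are blocked.
    removed_fingerprints.add(fingerprint(data))


def allow_upload(data: bytes) -> bool:
    # Reject any upload whose fingerprint matches previously removed content.
    # Note what is absent: no review of who posted it, why, or in what context.
    return fingerprint(data) not in removed_fingerprints
```

Because the filter sees only a fingerprint, a documentary clip and a propaganda clip containing the same footage are treated identically, which is precisely the concern digital rights groups raise about mandating such tools.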

“We’re fearful of function creep,’’ Evelyn Austin from the Netherlands-based digital rights organization Bits of Freedom said at a public conference.

‘’We see as inevitable a situation in which there is a filter for copyrighted content, a filter for allegedly terrorist content, a filter for possibly sexually explicit content, one for suspected hate speech and so on, creating a digital information ecosystem in which everything we say, even everything we try to say, is monitored.’’

Austin pointed out that these mechanisms undercut previous strategies that relied more heavily on the use of due process.

‘’Upload filtering….will replace notice-and-action mechanisms, which are bound by the rule of law, by a process in which content is taken down based on a company’s terms of service. This will strip users of their rights to freedom of expression and redress…’’

The draft EU proposal also applies stiff financial penalties to companies that fail to comply. For a single company, this can amount to up to 4 percent of its global turnover from the previous business year.

French digital rights group La Quadrature du Net offered a firm critique of the proposal, and noted the limitations it would set for smaller websites and services:

‘’From a technical, economical and human perspective, only a handful of providers will be able to comply with these rigorous obligations – mostly the Web giants.

To escape heavy sanctions, the other actors (economic or not) will have no other choice but to close down their hosting services.’’

“Through these tools,” they warned, “these monopolistic companies will be in charge of judging what can be said on the Internet, on almost any service.”

Indeed, worse than encouraging “self-regulation,” the EU proposal would take us further away from a world in which due process or other publicly accountable mechanisms are used to decide what we say and see online, and push us closer to relying entirely on proprietary technologies to decide what kinds of content are appropriate for public consumption — with no mechanism for public oversight.