As heated debates continue to rage over how governments, companies, and citizens should respond to disinformation and hate speech on social media, a forthcoming report on Content Regulation in the Digital Age by David Kaye, U.N. Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, could not be more timely or important. As Kaye described it in his call for submissions to help inform the report, his aim is to examine the “impact of platform content regulation on freedom of expression and examine appropriate private company standards and processes and the role that States should play in promoting and protecting freedom of opinion and expression online.”
RDR submitted a paper (downloadable as a PDF here) tying key findings and recommendations from the 2017 Corporate Accountability Index to many of the questions that Kaye aims to examine in his June 2018 report. The 2017 Index findings highlighted that, despite the important role internet platforms such as social media and search services play in mediating public discourse, and despite recent progress by some companies in disclosing policies and actions related to government requests, the process of policing content on internet platforms remains unacceptably opaque. As a result, we found that:
- Users of internet platforms cannot adequately understand how their online information environment is being governed and shaped, by whom, under what authority, and for what reasons. When transparency around the policing of online speech is inadequate, people do not know whom to hold accountable when their expression rights are infringed.
- This situation is exacerbated by the fact that some of the world’s most powerful internet platforms do not conduct systematic impact assessments of how their terms of service policies and enforcement mechanisms affect users’ rights.
- Furthermore, grievance and remedy mechanisms for users to report and obtain redress when their expression rights are infringed are woefully inadequate.
In light of these findings, we proposed the following recommendations for companies and governments:
1. Increase transparency about how laws governing online content are enforced via internet intermediaries and how decisions to restrict content are made and carried out. Companies should disclose their policies for deciding on content restrictions, whether made at the request of governments or private actors, or carried out at the company’s own initiative to enforce its terms of service. They should also disclose data on the volume and nature of content being restricted or removed, covering the full range of reasons that result in restriction. Governments must encourage, if not require, such transparency and match it with transparency of their own regarding the demands, both direct and indirect, that they place upon companies to restrict content.
2. Broaden impact assessment and human rights due diligence in relation to the regulation and private policing of content. Companies must conduct human rights impact assessments that examine their policies and mechanisms for identifying and restricting content, including terms of service enforcement and private flagging mechanisms. They must also disclose how such assessments are used to identify and mitigate any negative impact on freedom of expression that these policies and mechanisms may cause. Governments should likewise assess existing and proposed laws regulating content on internet platforms to ensure that they do not result in increased infringement of users’ freedom of expression rights.
3. Establish and support effective grievance and remedy mechanisms to address infringements of internet users’ freedom of expression rights. When content is erroneously removed, or when a law or policy is misinterpreted in a manner that censors speech that should be protected under international human rights law, effective grievance and remedy mechanisms are essential to mitigating harm. Such mechanisms are presently lacking on the world’s largest and most powerful internet platforms. Governments seeking increased policing of extremist and violent content by platforms should not only support but also participate in the development of effective grievance and remedy mechanisms.
Click here to download the full submission.