On January 20, Ranking Digital Rights (RDR) submitted a comment to the United Nations Educational, Scientific and Cultural Organization (UNESCO) expressing concerns about the organization’s proposed “model regulatory framework for the digital content platforms to secure information as a public good.” The draft will be further discussed this month at a UNESCO conference in Paris.
The proposed framework seeks to guide the development of national laws and regulations governing online speech on the largest platforms, including Meta and Twitter, while also proposing modes of self-regulation. On the positive side, it encourages regulation that requires content rules compatible with human rights, transparent content moderation processes, and systematic risk assessments. It also endorses the Santa Clara Principles on Transparency and Accountability in Content Moderation, which RDR helped develop.
RDR is an independent research program and human rights organization based at the think tank New America. We evaluate the policies and practices of the world’s most powerful tech and telecom companies and study their effects on people’s fundamental human rights, primarily through our yearly Corporate Accountability Index. Using this research, we push platforms hard to increase transparency and improve their respect for human rights. We have also conducted in-depth research on the role of the targeted advertising business model, a key driver of today’s massive proliferation of harmful content.
We therefore strongly share UNESCO’s concern about the prevalence of harmful content on digital platforms, including hate speech, harassment, doxxing, misinformation, and other types of content that damage freedom of expression and information, privacy, and other human rights. Much of this content disproportionately harms marginalized groups, creating additional barriers to their participation in civic discourse.
We also want to call attention to important problems with the proposed framework and its development process that will hamper its usefulness as a tool for addressing these shared concerns. These problems include:
- Unclear mandate: It is not clear that development of this framework is within UNESCO’s mandate. Its development should therefore not proceed without a decision by the UNESCO General Conference, the organization’s chief decision-making body. UNESCO should also cooperate closely with the UN Office of the High Commissioner for Human Rights, which could ensure that the framework does not inadvertently harm freedom of expression.
- Minimal consultation process: The draft was first published on December 19, 2022, with a deadline for public comment of January 20, 2023. This gave the public only a month to provide input, during a period when many people around the world were celebrating holidays. Such a truncated comment period disproportionately hampers organizations with limited resources, including those representing marginalized groups. Further, although the framework is intended to be globally applicable, UNESCO does not seem to have proactively reached out to a diverse set of civil society stakeholders for input.
- Neglect of the role of targeted advertising: As RDR has documented, the incentives for the amplification of harmful content stem directly from the targeted advertising business model. Among other harms it facilitates, this model rewards content providers and advertisers for publishing content (paid and unpaid) designed to attract and hold users’ attention for as long as possible. This incentive undermines the value of the internet as a trusted information source by amplifying the most sensational and extremist content to generate page views and, thus, advertising revenue. The best way to address these harms without encroaching on the right to freedom of expression is to protect the data used to create such advertising and to regulate how companies and advertisers may use that data to target messages and ads. At minimum, governments should require greater transparency and due diligence from tech companies about how ads are targeted and moderated. Unfortunately, the present draft framework calls only for encouraging—rather than enforcing—advertising transparency, and it does so only for political ads.
- Lack of attention to inferred data about users: To expand the data available for targeting ads and user-generated content, platforms also often algorithmically infer information about their users. This inferred information carries a high risk of being inaccurate and biased. Regulation should therefore require enhanced transparency around algorithmic inference, and the framework should incorporate guidance on how to mitigate the effects of inferred data on the amplification of hate speech, disinformation, and the other information harms it seeks to address.

More areas of concern are discussed in our full comment.
We appreciate the intentions of the framework’s drafters. We agree, however, with our civil society partners, such as the Global Network Initiative and Article 19, who have argued that the framework’s development should not proceed unless these issues are fully addressed.