Original art by Paweł Kuczyński.

From friends’ status updates, to messages from political candidates, to regular old ads, what we see on the internet today is rarely left to chance. Online and on our digital devices, technology companies track our every move, collecting troves of information about us that can be used to influence what we buy, how we vote, and much more. The technologies underlying these processes — targeted advertising and algorithmic systems — pose critical threats to users’ rights to privacy, free expression, and access to information.

Today, we are excited to release our newly updated methodology, which aims to address the increasingly complex human rights threats posed by algorithms and ad targeting technologies. We have also expanded our index to evaluate new types of services offered by Amazon and Alibaba.

Developed over more than a year of research, pilot testing, and stakeholder consultations, our new indicators set global accountability and transparency standards grounded in the Universal Declaration of Human Rights. As always, our indicators demonstrate how tech companies can respect and protect human rights online as they develop and deploy new technologies.

In the months ahead, we will be conducting research for the 2020 RDR Index, which will rank 26 companies using our updated indicators. More than 30 researchers around the world will participate in this rigorous process of data collection, verification, cross-checking, and review. We plan to publish our results in February 2021.

Learn more:

Stay tuned for more updates on our work!

Original art by Paweł Kuczyński

As the country struggles to respond to COVID-19 and the 2020 elections approach, misinformation on social media abounds, posing a threat to both public health and our democracy. In RDR’s new report, “Getting to the Source of Infodemics: It’s the Business Model,” RDR Senior Policy Analyst Nathalie Maréchal, Director Rebecca MacKinnon, and I examine how we got here and what companies and the U.S. Congress can do to curb the power of targeted advertising to spread misinformation.

Targeted advertising relies on the processing of vast amounts of user data, which is then used to profile and target users without their clear knowledge or consent. While other policy proposals focus on holding companies liable for their users’ online speech, our report calls for getting to the root of the problem: We describe concrete steps that Congress can take to hold social media companies accountable for their targeted advertising business model and the algorithmic systems that drive it.

This is the second in our two-part series on targeted advertising and algorithmic systems. The first report, “It’s Not Just the Content, It’s the Business Model: Democracy’s Online Speech Challenge,” written by Maréchal and journalist and digital rights advocate Ellery Roberts Biddle, explained how algorithms determine the spread and placement of user-generated content and paid advertising and why forcing companies to take down more content, more quickly is ineffective and would be disastrous for free speech.

In this second part of our series, we argue that international human rights standards provide a framework for holding social media platforms accountable for their social impact that complements existing U.S. law and can help lawmakers determine how best to regulate these companies without curtailing users’ rights.

Drawing on our five years of research for the Ranking Digital Rights (RDR) Corporate Accountability Index, we point to concrete ways that the three social media giants (Facebook, Google, and Twitter) have failed to respect users’ human rights as they deploy targeted advertising business models and algorithmic systems. We describe how the absence of data protection rules enables the unrestricted use of algorithms to make assumptions about users that determine what content they see and what advertising is targeted to them. It is precisely this targeting that can result in discriminatory practices as well as the amplification of misinformation and harmful speech. We then present concrete areas where Congress needs to act to mitigate the harms of misinformation and other dangerous speech without compromising free expression and privacy: transparency and accountability for online advertising, starting with political ads; federal privacy law; and corporate governance reform.

First, we urge U.S. policymakers to enact a federal privacy law that protects people from the harmful impact of targeted advertising. Such a law should ensure effective enforcement by designating an existing federal agency, or creating a new one, to enforce privacy and transparency requirements applicable to digital platforms. The law must include strong data-minimization and purpose-limitation provisions. This means, among other things, that users should not be able to opt in to discriminatory advertising or to the collection of data that would enable it. Companies must also give users clear control over the collection and sharing of their information. Congress should restrict how companies are able to target users, including prohibiting the use of third-party data to target specific individuals, as well as discriminatory advertising that violates users’ civil rights.

Second, Congress should require that platforms maintain a public ad database to ensure compliance with all privacy and civil rights laws when engaging in ad targeting. Legislators must break the current deadlock and pass the Honest Ads Act, expand the public ad database to include all advertisements, and allow regulators and researchers to audit it.

Finally, Congress should require relevant disclosure and due diligence around the social and human rights impact of targeted advertising and algorithmic systems. This means mandating disclosure of targeted advertising revenue, along with disclosure of environmental, social, and governance (ESG) information, including information relevant to the social impact of targeted advertising and algorithmic systems.

Political deadlock in Washington, D.C., has closed the window for lawmakers to act in time for the November 2020 elections, but this issue must be a bipartisan priority in future legislative sessions. In the meantime, the companies should take immediate, voluntary steps to anticipate and mitigate the negative impact of targeted advertising and related algorithmic systems on the upcoming elections: We call on Facebook, Google, and Twitter to curtail political ad targeting between now and the November elections in order to dramatically reduce the flow and impact of election-related disinformation and misinformation on social media.

Please read the report, join the conversation on Twitter using #itsthebusinessmodel, and email us at itsthebusinessmodel@rankingdigitalrights.org with your feedback and to request a webinar for your organization.

We would like to thank Craig Newmark Philanthropies for making this report possible.

Image from Shutterstock: Mobile

In early 2019, RDR began the process of revising and expanding the methodology for the 2020 RDR Corporate Accountability Index to address the human rights harms associated with companies’ use of targeted advertising and algorithmic systems, and to widen our scope to include new types of services offered by Amazon and Alibaba. Since then, we have published a set of draft indicators that address companies’ targeted advertising policies and practices and their development and use of algorithmic systems, and we released the results of a pilot study of these indicators.

Today, we are excited to share a draft of the full 2020 RDR Index methodology and open it for public feedback. In order to facilitate this input, we have broken down the consultation documents into three PDFs that can be downloaded from the following links:

Feedback on the draft methodology should be sent to methodology@rankingdigitalrights.org by May 15, 2020.

Our team will also conduct focused outreach to companies, human rights experts, civil society organizations, and other key stakeholders in the coming weeks. These consultations will help inform our decisions regarding the final 2020 RDR Index methodology.

We plan to publish the final 2020 RDR Index methodology, including the list of companies and services that will be included in the ranking, in early June.

We welcome input from all stakeholders!

Original art by Paweł Kuczyński

As the 2020 U.S. presidential campaign continues amid a pandemic with no precedent in living memory, politicians on both sides of the aisle are understandably eager to hold major internet companies accountable for the spread of disinformation, hate speech, and other problematic content. Unfortunately, their proposals focus on pressuring companies to purge their platforms of various kinds of objectionable content, including by amending or even revoking Section 230 of the 1996 Communications Decency Act, and do nothing to address the underlying cause of dysfunction: the surveillance capitalism business model.

Today we’re publishing a new report, “It’s Not Just the Content, It’s the Business Model: Democracy’s Online Speech Challenge,” that explains the connection between surveillance-based business models and the health of democracy. Written by RDR Senior Policy Analyst Nathalie Maréchal and journalist and digital rights advocate Ellery Roberts Biddle, the report argues that forcing companies to take down more content, more quickly is ineffective and would be disastrous for free speech. Instead, we should focus on the algorithms that shape users’ experiences.

In the report, we explain how algorithms determine the spread and placement of both user-generated content and paid advertising. Though separate systems, they share the same logic: showing each user the content they are most likely to engage with, according to the algorithm’s calculations. Another type of algorithm performs content moderation: the identification and removal of content that breaks the company’s rules. But this is no silver bullet, as these tools are unable to understand context, intent, and other factors that are key to whether a post or advertisement should be taken down.
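To make that shared logic concrete, here is a deliberately oversimplified sketch in Python. Every name, field, and weight in it is hypothetical; it illustrates the general pattern of engagement-driven ranking, not any platform’s actual system.

```python
# A deliberately simplified, hypothetical sketch of engagement-driven ranking.
# The field names, scores, and functions are invented for illustration;
# they are not any platform's actual code.
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    topic: str
    is_ad: bool  # paid ads and organic posts flow through the same ranking logic

def predicted_engagement(user_interests: dict, item: Item) -> float:
    """Stand-in for a learned model: how likely is this user to engage with this item?"""
    return user_interests.get(item.topic, 0.0)

def rank_feed(user_interests: dict, candidates: list) -> list:
    # Whatever the user is predicted to engage with most rises to the top,
    # whether it is a friend's post or a paid advertisement.
    return sorted(candidates,
                  key=lambda item: predicted_engagement(user_interests, item),
                  reverse=True)

feed = rank_feed(
    {"politics": 0.9, "sports": 0.2},          # an inferred interest profile
    [Item("ad-1", "politics", True),
     Item("post-7", "sports", False),
     Item("post-3", "politics", False)],
)
print([item.item_id for item in feed])          # ['ad-1', 'post-3', 'post-7']
```

The design choice the sketch highlights is the one the report criticizes: a single optimization target, predicted engagement, decides what rises to the top regardless of whether the item is a friend’s post or a paid ad.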

We outline why today’s technology is not capable of eliminating extremism and falsehood from the internet without stifling free expression to an unacceptable degree. While we accept that there will never be a perfect solution to these challenges, especially not at the scale at which the major tech platforms operate, we assert that if companies changed the systems that decide so much of what actually happens to our speech (paid and unpaid alike) once we post it online, they could significantly reduce the prevalence of disinformation and hateful content.
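The context problem is easy to see even in a toy example. The filter below is purely illustrative and far cruder than the machine-learning classifiers platforms actually deploy, but the failure modes are the same in kind: benign posts get swept up while coded or euphemistic abuse slips through.

```python
# Illustrative only: a toy keyword rule, far cruder than real moderation systems,
# shown to make the over- and under-blocking trade-off concrete.
BLOCKLIST = {"kill", "attack"}

def naive_filter(post: str) -> bool:
    """Return True if the post would be removed under a pure keyword rule."""
    words = {word.strip(".,!?").lower() for word in post.split()}
    return bool(words & BLOCKLIST)

# False positive: a benign figure of speech gets removed.
print(naive_filter("I'm going to kill it at my presentation tomorrow!"))  # True

# False negative: coded or euphemistic incitement sails through.
print(naive_filter("You know what needs to be done to those people."))    # False
```

More sophisticated classifiers shift the balance, but at the scale of billions of posts they still trade over-removal against under-removal in exactly this way.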

At the moment, determining exactly how to change these systems requires insight that only the platforms possess. Very little is publicly known about how these algorithmic systems work, despite their enormous influence on our society. If companies won’t disclose this information voluntarily, Congress must intervene and insist on greater transparency, as a first step toward accountability. Once regulators and the American public have a better understanding of what happens under the hood, we can have an informed debate about whether to regulate the algorithms themselves, and if so, how.

This report is the first in a two-part series and relies on more than five years of research for the RDR Corporate Accountability Index as well as the findings from a just-released RDR pilot study testing draft indicators on targeted advertising and algorithmic systems.

The second installment, to be published later this spring [now available here], will examine two other regulatory interventions that would help restructure our digital public sphere so that it bolsters democracy rather than undermines it. First, national privacy legislation would blunt the power of content-shaping and ad-targeting algorithms by limiting how personal information can be used. Second, requiring companies to conduct human rights impact assessments of all aspects of their products and services, and to be transparent about the results, will help ensure that they consider the public interest, not just their bottom line.

We had to cancel our planned launch event due to the novel coronavirus, but we’ll be organizing webinars to discuss why we think it’s the business model (#itsthebusinessmodel) we should pay attention to, not just the content.

Please read the report, join the conversation on Twitter using #itsthebusinessmodel, and email us at itsthebusinessmodel@rankingdigitalrights.org with your feedback and to request a webinar for your organization.

We would like to thank Craig Newmark Philanthropies for making this report possible.

Shutterstock

Algorithms now shape nearly every facet of our digital lives. They collect and process vast amounts of user data, compiling sophisticated profiles about every user. They categorize us according to our demographics, behaviors, and location. They also make assumptions about our likes and dislikes, known as inferred data. Then they monetize our digital dossiers, putting them up for advertisers to bid on as the advertisers tick boxes to pick the characteristics of the people they want to target.

At least we think that’s what algorithms do, based on what we’ve been able to learn from research and reporting. But most of us are still in the dark about exactly how they do it.
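To make the general shape of that process concrete, here is a purely hypothetical sketch. Every field, category, and function name is invented for illustration; no ad platform publishes its actual implementation.

```python
# A hypothetical, heavily simplified sketch of the profiling-and-targeting
# pipeline described above. All names and categories are invented.
from dataclasses import dataclass, field

@dataclass
class Profile:
    user_id: str
    demographics: dict              # e.g. {"age_range": "25-34"}
    location: str                   # e.g. "Washington, DC"
    behaviors: set                  # observed actions, e.g. {"clicked_fitness_ad"}
    inferred_interests: set = field(default_factory=set)  # assumptions the system makes

def matches(profile: Profile, criteria: dict) -> bool:
    """Does this profile fit the boxes an advertiser ticked?"""
    if criteria.get("location") and profile.location != criteria["location"]:
        return False
    if criteria.get("age_range") and profile.demographics.get("age_range") != criteria["age_range"]:
        return False
    wanted = set(criteria.get("interests", []))
    return wanted <= (profile.inferred_interests | profile.behaviors)

def build_audience(profiles: list, criteria: dict) -> list:
    # The "audience" an advertiser bids on: every user whose dossier fits the criteria.
    return [p.user_id for p in profiles if matches(p, criteria)]

users = [
    Profile("u1", {"age_range": "25-34"}, "Washington, DC",
            {"clicked_fitness_ad"}, {"politics", "running"}),
    Profile("u2", {"age_range": "45-54"}, "Washington, DC",
            set(), {"gardening"}),
]
print(build_audience(users, {"location": "Washington, DC", "interests": ["politics"]}))  # ['u1']
```

The point of the sketch is the asymmetry it makes visible: the advertiser picks the boxes, the platform holds the dossiers, and the user typically never sees which categories they have been placed in.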

According to new findings in a pilot study released by RDR this week, not one of the eight U.S. and European companies evaluated disclosed how they develop and train their algorithmic systems. This means that every piece of promoted or recommended content, and every ad we encounter, appears on our screen as the result of a process and a set of rules no one but the company can see. These processes not only pose significant risks to privacy—particularly when companies collect data and make inferences about users without their knowledge or consent—but can also result in discriminatory outcomes if algorithmic systems are based on biased data sets.

Funded by the Open Society Foundations’ Information Program, the study was part of RDR’s ongoing work to include questions related to targeted advertising and algorithmic systems in its methodology for the RDR Corporate Accountability Index. The companies evaluated were U.S. digital platforms Apple, Facebook, Google, Microsoft, and Twitter, and European telecom companies Deutsche Telekom, Telefónica, and Vodafone.

The pilot study evaluated the companies’ transparency about their use of targeted advertising and algorithmic systems based on a set of draft indicators developed by RDR last year. Generated from real-world human rights risk scenarios (detailed in two accompanying documents) and grounded in international human rights frameworks, the indicators set standards for how companies should disclose policies and practices related to targeted advertising and algorithmic systems as well as how they should govern such practices and assess the risks they pose to human rights.

The results of the pilot study reveal multiple shortcomings across all companies. In addition to the lack of disclosure on the development and training of algorithmic systems, companies did not disclose whether or how users can control how their information is used or the categories they are sorted into. While most companies disclosed some information about their targeting rules, no company disclosed what actions users can take to remove ad content that violates these rules, making it impossible to hold companies accountable for their own terms of service.

In the realm of corporate governance, European telecoms led in making explicit public commitments to respect human rights as they develop and use algorithmic systems. Among U.S. companies, only Microsoft disclosed whether it conducts risk assessments of the impact of its development and use of algorithmic systems on free expression and privacy. No company in this pilot disclosed whether it conducts risk assessments on its use of targeted advertising.

Companies also showed little to no commitment to informing users about the potential human rights harms associated with algorithmic systems and targeted ads.

The pilot findings will help RDR determine which of the draft indicators to finalize and include in the updated methodology for the 2020 RDR Index. The findings also establish a baseline against which we can measure company improvements even before the next RDR Index is released. Further, the pilot findings offer a glimpse of the transparency and accountability challenges that tech companies have yet to address with regard to targeted advertising and algorithmic systems and provide an important benchmark for the road ahead.

Finally, the pilot findings also informed RDR’s new policy report, “It’s Not Just the Content, It’s the Business Model: Democracy’s Online Speech Challenge.” The first in a two-part series aimed at U.S. policymakers and anybody concerned with the question of how internet platforms should be regulated, the report is set for release tomorrow. Part two, which will focus on corporate governance of targeted advertising and algorithmic systems, will be out later this spring.

We welcome input or feedback about research presented in this study or the methodology at methodology@rankingdigitalrights.org.