RDR pilot study underscores the need for rights-based standards in targeted advertising and algorithmic systems

Algorithms now shape nearly every facet of our digital lives. They collect and process vast amounts of user data, compiling sophisticated profiles of every user. They categorize us by our demographics, behaviors, and location data, and they make assumptions about our likes and dislikes, known as inferred data. They then monetize these digital dossiers, letting advertisers bid on us by ticking boxes to pick the characteristics of the people they want to target.
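To make that pipeline concrete, here is a minimal sketch of the profile-and-targeting model the paragraph describes. Everything in it is a simplifying assumption for illustration: the names, fields, and matching rule are hypothetical, not any company's actual system.

    # Hypothetical sketch of profile-based ad targeting (illustrative only).
    # A user profile combines declared, behavioral, location, and inferred
    # data; an advertiser's targeting spec is the set of boxes they tick;
    # profiles that match the spec become eligible for the advertiser's bids.
    from dataclasses import dataclass, field

    @dataclass
    class UserProfile:
        user_id: str
        demographics: dict = field(default_factory=dict)  # e.g. {"age_range": "25-34"}
        behaviors: set = field(default_factory=set)       # e.g. {"clicked_sports_ads"}
        location: str = ""                                # e.g. "Berlin"
        inferred: set = field(default_factory=set)        # guessed, never declared by the user

    @dataclass
    class TargetingSpec:
        required_location: str
        required_interests: set

    def matches(profile: UserProfile, spec: TargetingSpec) -> bool:
        # The user is targetable if their location matches and every interest
        # the advertiser ticked appears among observed plus inferred interests.
        interests = profile.behaviors | profile.inferred
        return (profile.location == spec.required_location
                and spec.required_interests <= interests)

    profile = UserProfile("u123", {"age_range": "25-34"}, {"clicked_sports_ads"},
                          "Berlin", {"likely_runner"})
    spec = TargetingSpec("Berlin", {"likely_runner"})
    print(matches(profile, spec))  # True: targetable on inferred data alone

Note that the match succeeds on an attribute the user never provided, which is exactly the consent gap discussed below.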

At least we think that’s what algorithms do, based on what we’ve been able to learn from research and reporting. But most of us are still in the dark about exactly how they do it.

According to new findings in a pilot study released by RDR this week, not one of eight U.S. and European companies evaluated disclosed how they develop and train their algorithmic systems. This means that every piece of promoted or recommended content, and every ad we encounter, appears on our screen as the result of a process and a set of rules no one but the company can see. These processes not only pose significant risks to privacy—particularly when companies collect data and make inferences about users without their knowledge or consent—but can also result in discriminatory outcomes if algorithmic systems are based on biased data sets. 

Funded by the Open Society Foundations’ Information Program, the study was part of RDR’s ongoing work to include questions related to targeted advertising and algorithmic systems in its methodology for the RDR Corporate Accountability Index. The companies evaluated were U.S. digital platforms Apple, Facebook, Google, Microsoft, and Twitter, and European telecom companies Deutsche Telekom, Telefónica, and Vodafone.

The pilot study evaluated the companies’ transparency about their use of targeted advertising and algorithmic systems, based on a set of draft indicators that RDR developed last year. Generated from real-world human rights risk scenarios and grounded in international human rights frameworks, the indicators set standards for how companies should disclose their policies and practices related to targeted advertising and algorithmic systems, how they should govern those practices, and how they should assess the risks they pose to human rights.

The results of the pilot study reveal multiple shortcomings across all companies. Beyond the lack of disclosure about how algorithmic systems are developed and trained, no company disclosed whether or how users can control how their information is used or which categories they are sorted into. And while most companies disclosed some information about their ad targeting rules, none disclosed any data about what actions users can take to get ad content that violates these rules removed, making it impossible to hold companies accountable to their own terms of service.

In the realm of corporate governance, the European telecoms led in making explicit public commitments to respect human rights as they develop and use algorithmic systems. Among the U.S. companies, only Microsoft disclosed whether it conducts risk assessments on the impact of its development and use of algorithmic systems on freedom of expression and privacy. No company in the pilot disclosed whether it conducts risk assessments on its use of targeted advertising.

Companies also showed little to no commitment to informing users about the potential human rights harms associated with algorithmic systems and targeted ads.

The pilot findings will help RDR determine which of the draft indicators to finalize and include in the updated methodology for the 2020 RDR Index. The findings also establish a baseline against which we can measure company improvements even before the next RDR Index is released. Further, the pilot findings offer a glimpse of the transparency and accountability challenges that tech companies have yet to address with regard to targeted advertising and algorithmic systems and provide an important benchmark for the road ahead.

Finally, the pilot findings also informed RDR’s new policy report, “It’s Not Just the Content, It’s the Business Model: Democracy’s Online Speech Challenge.” The first in a two-part series aimed at U.S. policymakers and anyone concerned with how internet platforms should be regulated, the report is set for release tomorrow. Part two, which will focus on corporate governance of targeted advertising and algorithmic systems, will follow later this spring.

We welcome input or feedback about research presented in this study or the methodology at methodology@rankingdigitalrights.org.
