RDR’s Submission to the Aspen Disorder Prize: Making Online Ads Accountable


Throughout 2021, with the U.S. midterm elections on the horizon, the Aspen Institute convened the Aspen Commission on Information Disorder, which culminated in a report detailing 15 recommendations for stakeholders (including civil society) to address misinformation and the “crisis of faith in key institutions.” To advance this effort, the Aspen Tech Policy Hub launched the “Information Disorder Prize Competition” to fund projects that “alleviate the crisis of mis- and disinformation in America.”

RDR is proud to have been selected as one of four semi-finalists for our proposed project “Treating Information Disorder by Making Online Ads Accountable.” Our project prototype focused on Meta and Twitter, two global tech giants that derive almost all their revenue from targeted advertising and have an outsized influence on global politics and democracy. We wanted to understand how each company identifies the potential risks that its targeted advertising business could create, enable, or amplify; how it prevents, mitigates, or remedies these harms; and how it communicates to its users, customers, investors, and policymakers about governing these risks.

Having tracked both companies and their governance of human rights risks for years, we expected to find a lack of transparency that would, in turn, suggest deficiencies in the companies’ policies and practices safeguarding the quality of our information. Such deficiencies would imply that companies don’t have an economic interest in self-regulating to mitigate potential harms to democracy and human rights, pointing to a clear need for regulatory intervention.

Our project shed light on Meta and Twitter’s existing systems for governing ads by defining what a truly responsible system would look like, creating “model” policies to illustrate the norms we proposed, and assessing Meta and Twitter’s disclosed policies against our standards. Our analysis revealed key gaps in terms of both substance and company disclosures. Had we won the competition’s grand prize, we would have expanded the project to include video-sharing platforms like YouTube and ad networks like Google Ads alongside social media sites, ultimately issuing public scorecards to hold these adtech platforms accountable for their role in spreading messages that destabilize democracy and undermine human rights.

To Tackle Disinformation, We Need Accountability for Targeted Advertising

In the days following the victory of former Brazilian President Lula da Silva over the country’s far-right leader Jair Bolsonaro in October 2022, protesters took to the streets and even blocked off roads, convinced that Lula’s victory was the result of a fraudulent vote. To election observers like those at the Carter Center, this wasn’t surprising. The organization had already noted that the Brazilian election was “marked by disinformation networks” pushing the idea of a flawed voting system that supposedly favored the left.

To many Americans, this story might sound eerily familiar: After all, spreading disinformation that calls into doubt the validity of elections is at the core of every authoritarian playbook. Yet, ahead of the November 2022 midterm elections in the U.S., companies like Meta and Twitter notably failed to improve on policies that had aided the proliferation of false information about the integrity of the 2020 election results. Meanwhile, Elon Musk’s dismissal of Twitter employees in charge of election integrity just two weeks after he took the company private only heightened fears of an increase in the spread of hateful and false content in the lead-up to the vote. In the end, election deniers fared poorly in competitive races in the U.S. midterms, but the fact remains that Twitter currently has almost no capacity to combat election disinformation.

The surveillance advertising industry has long claimed that its business model, which rewards the creation and spread of polarizing content with views and, thus, ad dollars, neither contributes to the current crisis of global democracy nor conflicts with human rights. Our research at RDR has shown this to be untrue. The business model is, in fact, at the heart of the problem: it drives the extensive collection of user data and powers revenue-maximizing algorithms that prioritize the most sensational and controversial content, with strongly negative impacts on the quality of the information we share.

We at RDR have long argued that, beyond clear privacy violations, the system’s purpose (its value proposition to advertisers) is to discriminate among potential “targets” in order to more effectively influence their behavior as citizens and as consumers. We advocate for the de facto abolition of surveillance advertising through federal privacy legislation, FTC rulemaking, and other regulatory interventions. At the same time, we believe in taking a harm-reduction approach by improving the governance and oversight of the targeted-advertising ecosystem as it currently exists. That requires taking stock of this ecosystem and analyzing what improvements are needed, as we did in our project for the Aspen Prize.

Back in September 2022, as Lula da Silva was entering the final stretch of his electoral campaign against Bolsonaro in Brazil, SumOfUs became the second civil society organization in two months to call out Meta, Facebook’s parent company, for failing to crack down on ads spreading disinformation in the country ahead of the vote. Its report identified 56 ads containing disinformation, which had been viewed by 3 million people.

Yet just one month earlier, in August, Meta had released a set of policies ostensibly addressing this very problem, explaining that the company was “preparing for Brazil’s 2022 election.” Meta promised, among other things, to “prohibit ads calling into question the legitimacy of the upcoming election” and to “protect the integrity of presidential elections.” Instead, mere weeks later, SumOfUs found an “ecosystem of content seeking to undermine the electoral process” on Brazil’s Facebook pages. That same month, Global Witness tested the platform and found that Brazilian Portuguese ads it submitted containing election-related disinformation were accepted for publication by Facebook.

It is clear that moderating content, including advertising, after it has already been posted has always been an insufficient, albeit necessary, response to the scope of the disinformation problem. Company efforts to address the negative effects of their core business operations do not, unfortunately, change the system or the incentives that drive it.

The Methodology

At RDR, we believe that transparency is the first step toward accountability and evidence-based policymaking. The cornerstone of our work is the Corporate Accountability Index, a set of scorecards assessing Big Tech companies and Telco Giants on their policies and practices affecting human rights, notably freedom of expression and information as well as privacy. The scorecards are grounded in indicator-based research: We look for policies that codify best practices recognized as protecting and promoting human rights. (Read more about the RDR indicators.) How well companies meet the expectations set forth in each indicator (which cover three categories: governance, freedom of expression and information, and privacy) determines their score and their rank in our evaluations.

For this project, we developed ten new indicators to determine whether a company governs its advertising systems responsibly, and applied them, along with six existing indicators from our established methodology, to Meta and Twitter’s ad businesses. We looked at whether the rules for ad content and targeting are public, what those rules actually say, how the company enforces them, whether it respects data privacy, and whether it conducts due diligence to make sure its advertising products won’t cause or contribute to human rights violations. These indicators also helped us determine whether a company is being transparent enough about its advertising systems, including its policy enforcement. Together, they provide a roadmap for companies, concrete advocacy targets for civil society, and a framework for policymakers as they consider regulatory interventions.
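To make the scoring mechanics concrete, here is a minimal sketch of how indicator-based scoring of this kind can work, assuming a simple rubric in which each indicator is made up of elements answered “yes,” “partial,” or “no.” The indicator names, element counts, and averaging scheme below are hypothetical illustrations, not RDR’s actual rubric.

```python
# Minimal sketch of indicator-based scoring.
# Hypothetical rubric for illustration; not RDR's actual methodology.

ELEMENT_CREDIT = {"yes": 1.0, "partial": 0.5, "no": 0.0}

def indicator_score(element_answers: list[str]) -> float:
    """Average element credits into one indicator score (0.0 to 1.0)."""
    return sum(ELEMENT_CREDIT[a] for a in element_answers) / len(element_answers)

def company_score(indicators: dict[str, list[str]]) -> float:
    """Average all indicator scores into an overall percentage."""
    scores = [indicator_score(answers) for answers in indicators.values()]
    return 100 * sum(scores) / len(scores)

# Hypothetical evaluation of one company's ad governance disclosures:
evaluation = {
    "ad_content_rules_published": ["yes", "yes", "partial"],
    "ad_targeting_rules_published": ["yes", "partial", "no"],
    "ad_rule_enforcement_data": ["no", "no", "no"],
    "ad_human_rights_due_diligence": ["partial", "no", "partial"],
}

print(f"Overall score: {company_score(evaluation):.0f}%")  # prints 42%
```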

Analyzing Meta and Twitter’s Current Ad Governance Systems

Both companies earned failing scores, with Meta getting 47% of possible points and Twitter 43%. The difference is owed primarily to Meta having an advertising transparency database, the Meta Ads Library (which allows anyone to view some key information about the ads currently running on Meta platforms), and offering an appeals mechanism when it rejects ads, which Twitter does not appear to have. Meta’s higher score is notable in light of our 2022 Big Tech Scorecard, which placed Twitter above all other digital platforms thanks to its high score on freedom of expression.

In the 2022 Big Tech Scorecard, we found that Twitter was much more forthcoming about its content moderation policies and practices than its competitors, though we don’t know whether that will continue under its new owner, Elon Musk. When it came to advertising, however, the information Twitter shared left much to be desired. Below we summarize the results:

First, the good news: Both companies published rules for ad content and targeting (though some were difficult to find) and explained the processes and technologies they used to enforce those rules. Both banned hate speech, incitement to violence, and other discriminatory content, as well as fraudulent or misleading statements, as part of their advertising content rules. The companies also banned ads for some services that often prey on people’s personal hardships, such as bail bonds or payday loans. Both placed some limits on ad targeting based on protected characteristics and certain types of sensitive data, such as health conditions or status, sexual orientation, religious leanings, and political views.

Now, the bad news: Neither company published any data about the number or nature of actions—like rejecting or taking down an ad—taken to enforce their advertising rules, despite doing so for non-ad content. Unsurprisingly, given their reliance on surveillance advertising, both companies were opaque about how they process, use, share, and retain information about users, leaving them with great latitude to invade user privacy. Both companies suggested that advertisers use algorithmic optimization—an automated process to guess who is most likely to click on an ad—to narrow an ad’s audience beyond the specified parameters. This practice has been linked to illegal discrimination, notably in the National Fair Housing Alliance et al. v. Facebook, Inc. case. It also helps direct paid disinformation to the users who are most likely, based on data inferred by algorithms, to engage with it. Meta’s Ad Library does not include enough information to understand the decisions made by its algorithmic optimization. 
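To illustrate what narrowing “beyond the specified parameters” can look like, here is a highly simplified, hypothetical sketch of ad delivery with algorithmic optimization: the advertiser’s targeting only defines the eligible pool, and a platform-side engagement model then decides who within that pool actually sees the ad. All names, fields, and the scoring step are invented for illustration; they do not describe Meta’s or Twitter’s actual systems.

```python
# Hypothetical sketch of ad delivery with algorithmic optimization.
# The advertiser's explicit targeting only defines the eligible pool;
# the platform's engagement model decides who actually sees the ad.

from dataclasses import dataclass

@dataclass
class User:
    user_id: int
    country: str
    age: int
    predicted_engagement: float  # platform-inferred click likelihood, 0 to 1

def eligible(user: User, targeting: dict) -> bool:
    """Apply the advertiser's explicit targeting parameters."""
    return user.country in targeting["countries"] and user.age >= targeting["min_age"]

def deliver(users: list[User], targeting: dict, budget_slots: int) -> list[User]:
    """Narrow the eligible audience to those the model scores highest."""
    pool = [u for u in users if eligible(u, targeting)]
    pool.sort(key=lambda u: u.predicted_engagement, reverse=True)
    return pool[:budget_slots]  # everyone else is silently excluded
```

Because the engagement score is inferred from behavioral data, the delivered audience can skew along lines the advertiser never specified.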

What Does This Mean for Our Rights?

As anticipated, both Meta and Twitter fell short of our expectations in terms of both policy substance and transparency. For the past decade and a half, policymakers and the public have focused almost exclusively on the governance of noncommercial user speech. Government involvement in this area is hampered by international free expression standards and, even more so, by the First Amendment of the U.S. Constitution. The American public debate has grown particularly toxic lately, with proponents of transparent, nuanced, and accountable content moderation facing off against those who reject any limits to online speech. This debate is now playing out on several fronts, notably at the U.S. Supreme Court and in Elon Musk’s recent corporate takeover of Twitter.

In the absence of sustained pressure to get serious about advertising governance, platforms have had free rein to grow so big that human moderation can’t keep up, thus leaving imperfect algorithmic systems to act as the primary arbiters of the paid influence economy. Advertisers and propagandists can run entire global influence campaigns without having to discuss their ad content or targeting parameters with a human being. Trying to govern the internet without governing online ads is a fool’s errand.

The pathologies of our global digital ecosystem are intertwined with advertising-supported platform business models: Harms lie in the ads themselves, in the discrimination that ad targeting enables, and in the incentives these models create for platforms’ algorithmic recommendation systems to fill our feeds with disinformation, among other things. These business models also provide revenue for propaganda-for-profit outlets, often without the advertisers’ knowledge. A recent investigation by ProPublica found that Google frequently violates its own policy of not placing ads on content making “unreliable or harmful claims.” This is particularly true when that content isn’t in English. For example, in May 2021, an ad for the American Red Cross appeared on a far-right German website that downplayed the pandemic, comparing COVID-19 to the flu. The American Red Cross was forced to explain that the ad was placed automatically, without its control.

Conclusion

It is clear that we need a new set of norms for the adtech sector that, if upheld, would thwart malicious influence campaigns and disinformation-for-profit operations. These norms would deal with commercial activity rather than private speech, and could form the basis for legislation, FTC rulemaking, or other forms of regulation.

Meta and Twitter are far from the only players here, and advertising-funded social media platforms aren’t the only types of companies begging for oversight. Our pitch to the competition also proposed evaluations of ad exchanges, the intermediaries that facilitate ad auctions for websites all around the internet, as well as ad-supported video-sharing sites like YouTube. Finally, we need to combat the sale of advertisements that, while they may be unobjectionable themselves, provide a revenue stream for public figures like Alex Jones who purposely distribute disinformation and undermine democracy.

In pushing for transparency and accountability in digital advertising, we hope to make it easier for public-interest advocates from all sectors to follow money and information as they flow through this complex influence machine. Since 2020, RDR has included targeted advertising policies in our company evaluations. We have also recently begun monitoring the myriad ways in which telecommunications companies are involved in targeted advertising. Though these companies are discussed less often than the platforms, their involvement has implications for our democracies that remain too often unexplored. Our Telco Giants Scorecard shines further light on the advertising businesses of mobile operators and ISPs around the world.

By continuing to uncover more information about the ad systems that have such huge impacts on our society and information ecosystems, we can better equip ourselves for the immense challenge of pushing back against resurgent authoritarianism and of strengthening our democracies.
