We can’t govern the internet without governing online advertising. Here’s how to do it.

By Nathalie Maréchal

We’ve been saying it for a while, but it bears repeating: the social ills that we associate with digital platforms—hate speech, disinformation, election interference, and more—are fundamentally connected to the surveillance advertising business model that fuels companies like Alphabet, Meta, and Twitter. We are thrilled to see that more and more people, including policymakers, share our analysis. 

But it’s one thing to diagnose the problem; it’s another to fix it. In this essay, I lay out RDR’s prescription for fixing online advertising, and make a case for how this could help solve some of the problems with organic content, too.

The problem with online ads

Digital networks have fundamentally changed the game of advertising. Networked technologies enable anyone who wants to buy or sell an ad, or to broker the transaction in between, to target consumers with unprecedented precision.

As social media companies’ targeting technologies become more precise, people increasingly have the eerie sensation that their phones are spying on them. Newly pregnant women start seeing ads for maternity clothes before they’ve shared their news with loved ones. Some voters see ads that help inform their choices at the polls, while others see ads telling them to go to the polls on the wrong day.

Mass privacy violations and automated discrimination are cornerstones of what we now call surveillance advertising (also known as behavioral or programmatic advertising). This multi-billion-dollar global industry, estimated to reach U.S. $786.2 billion by 2026, is still a new frontier of the digital world, an all but lawless Wild West where a few major industry players have seized territory for their own gain. Together, Alphabet, Amazon, and Meta control half of the global digital ad market. Ads are a fundamental driver of revenue for these companies, and for thousands of others.

Of course, online ads don’t just support tech behemoths – they are essential to democratic processes, the growth of small businesses, and the sustainability of digital journalism. The enduring role of online advertising in our media environments, and in our societies at large, makes it all the more important that we govern it in the public interest.

What we know – and what we don’t know

Online advertising is everywhere, yet poorly understood. Why do we see the ads we see? Who decides what kinds of content can appear in an ad? Who controls the mechanisms for buying and selling ads? How are all these decisions made? And what happens when someone breaks the rules?

Right now, we don’t have many answers to these critical questions. We do know a few things: companies typically have rules for ad content and for ad targeting, but independent research suggests that they sometimes do a poor job of enforcing these rules. If companies divulged more about how they enforce their rules (and what technologies lie behind their processes), we’d know more. Our own research shows that among the industry’s leaders, there is virtually no transparency reporting about ad policy enforcement. (TikTok, which we don’t rank in the Big Tech Scorecard, discloses the raw number of rejected ads.)

We have good reason to believe that the major platforms are no better at moderating ads than they are at moderating ordinary user content. Journalists and activists now routinely test Meta’s ad moderation systems by trying to buy ads that blatantly run afoul of the platform’s rules for ad content and targeting. All too often, the ads are approved, whether they promote drinking bleach to prevent COVID-19, push false narratives about Russia’s invasion of Ukraine, or advocate acts of genocide against Rohingya Muslims.

Ad targeting parameters can also introduce harmful and even illegal discrimination. In 2019, Meta (then Facebook) settled a class action lawsuit filed by civil rights groups after ProPublica showed that the company’s advertising systems effectively enabled racial discrimination by allowing real estate advertisers to choose which “ethnic affinity groups” they wanted to target and which ones they wanted to exclude.

Beyond social media platforms, ad networks (like Google Ads) use algorithms to place ads on third-party websites. This kind of algorithmic placement, with scant human oversight, has created scenarios in which unsuspecting advertisers’ products appear on websites peddling hate speech, disinformation, and other objectionable content. Groups like the Check My Ads Institute argue that defunding the entities behind such harmful content is a powerful way to combat it without undermining free expression, but their efforts are hindered by ad networks’ lack of transparency.

What would a responsible, accountable online ad ecosystem look like?

At Ranking Digital Rights, we believe that achieving the vision of a global internet that supports and sustains human rights will require a global online advertising ecosystem that does the same. But the companies that dominate the digital ad market have little incentive (apart from occasional reputational bruising) to change their ways. It is high time for policymakers to constrain these corporate behaviors and abuses of power through law and regulation.

We also believe that states can regulate ads in ways that they can’t, and shouldn’t, regulate ordinary user speech. In the U.S., home to most of the companies that have built and profited from the surveillance advertising regime that dominates the internet today, the right to free speech is firmly protected. But the law does not protect a person’s right to pay a company or intermediary to display or distribute their speech.

Policymakers should pursue a wholesale ban on surveillance advertising. Above all else, we believe that surveillance advertising—built on a foundation of privacy violations and discrimination by algorithm—must be banned in order to engineer a shift to an approach that respects human rights, such as contextual advertising. As far-fetched as this idea seemed as recently as two years ago, since the beginning of this year we’ve seen Silicon Valley’s own representative in Congress introduce the Banning Surveillance Advertising Act and President Biden call for a ban on targeting ads at kids. Under Chair Lina Khan, the Federal Trade Commission is poised to embark on a sweeping privacy rule-making process. And in Brussels, with the impending passage of the Digital Markets Act and Digital Services Act, the EU seems prepared to strengthen consent requirements and to ban all ad targeting of minors as well as the use of “sensitive data.” Things are far from settled, but it is clear that there is political will to take action against surveillance advertising on both sides of the Atlantic.

But banning surveillance advertising, and thus ending the discriminatory targeting it enables, is only one part of governing online ads. Regardless of whether, how, and on what timeline surveillance advertising is abolished, we still need rules of the road for the ads we see, where we see them, and who makes money from the transaction.

With or without a total ban on surveillance advertising, policymakers should pass legislation that requires:

  • Corporate transparency: Companies should have transparent and well-enforced policies about ad content, ad targeting, where ads will appear (“brand safety”), who can purchase ads, and how prices are set. They should include data about ad policy enforcement in their transparency reporting. They should be compelled to disclose how they comply with various legal requirements related to advertising (including political advertising) in the countries where they show ads. They should also be required to report on their progress toward linguistic equity: companies that accept ads in a given language should be able to effectively moderate ads in that language.
  • Human rights due diligence and independent auditing: Companies should conduct human rights impact assessments on all of their ad policies and relevant enforcement processes. They should also enable independent researchers and regulatory bodies to access data about advertising, including data that can help them independently verify company claims about rule enforcement.
  • Appeal and remedy: All enforcement systems produce errors, which is why appeals and other remedy systems (including the ability to sue for damages) are essential. Advertisers should be able to appeal when their ads are incorrectly rejected, and public-interest regulators should create mechanisms to ensure that prohibited ads don’t make it through, for example by requiring external third-party audits of ad moderation systems.

When it comes to the actual content of online ads, we urge policymakers to consider the public interest and human rights impacts of ads in some of the areas we’ve mentioned above (political ads, ads that discriminate through their targeting) and work with independent experts to develop strong rules in this area. In the U.S., policymakers found a way to set boundaries for broadcast and print ad content in the 1970s, when the public health risks posed by tobacco advertisements forced the issue. Today, online political ads are still unregulated, despite repeated efforts to bring the Honest Ads Act to a vote. We have reached the point where policymakers may need to set boundaries for content in online advertising, but the specifics of what this might entail fall outside the scope of our expertise.

This may sound risky at first – government attempts to address “online harms” through restrictions on internet users’ speech consistently run afoul of free expression protections like Article 19 of the UDHR/ICCPR, Article 11 of the EU Charter of Fundamental Rights, and the First Amendment of the US Constitution. But paid speech does not enjoy the same protections, and most countries limit print and broadcast advertising in various ways, though some of these laws do not extend to online ads. 

Regardless of their jurisdiction, policymakers should carry out human rights impact assessments on any proposed policies, and consider whether new laws are needed to protect human rights and democracy with respect to online ads. Part of that assessment should focus on whether some degree of intermediary liability for advertising would advance the public interest, while carefully considering unintended negative consequences for small businesses, media outlets, and other actors for whom online ads provide sustainability.

What will it take to get us there?

In theory, all governments can (and should!) regulate online advertising, but in practice, the US and EU have an outsized role to play, because of their market size and ability to influence legal frameworks beyond their borders. They are also home to most of the world’s dominant ad tech companies. Our policy focus has been on Washington because the majority of companies we are concerned about are headquartered in the U.S., but we are tracking developments in Brussels closely, particularly the trilogue negotiations surrounding the Digital Markets Act and the Digital Services Act.

Ideally, the US and the EU should ban surveillance advertising as part of a comprehensive 21st-century privacy and data protection framework. If a full ban on surveillance advertising isn’t politically feasible, at least in the short term, they should regulate data collection and targeting and vigorously enforce the rules, recognizing that the more complex the rules, the harder and more costly they will be to enforce.

Finally, we suspect that the kinds of reforms we’re pushing for would create positive side effects for unpaid content, without threatening to compromise free expression as many proposals to regulate user-generated content do. First, disinformation-for-profit outfits would crumble without revenue from surveillance advertising. Second, ending constant data collection would weaken the recommendation algorithms that cause so much harmful content to go viral. And third, once companies have robust systems for governing ads—including AI content moderation tools that work across languages—they could repurpose them for unpaid content. Big Tech whistleblowers have shown that no such systems exist today for most of the world’s languages.

The world’s most powerful tech companies have had years to tackle and invest in solving these problems, but instead they have hedged their bets, focusing relentlessly on the profits they reap from ads and organic content alike—and doing damage control only when things go awry. It is time for policymakers to step up and target ads for the harms they cause, instead of continuing to let them target us.
