
Edzell Castle, Angus, Scotland. Photo by John Oldenbuck via Wikimedia Commons. CC BY-SA 3.0

By Veszna Wessenauer and Ellery Roberts Biddle

When Apple announced its plans to tighten restrictions on third-party tracking by app developers, privacy advocates—including us—were intrigued. The company seemed to be charting a new course for digital advertising that would give users much more power to decide whether or not advertisers could track and target them across the web. But we also wondered: What was in it for Apple?

Now we know. The company's advertising business has more than tripled its market share since it rolled out the App Tracking Transparency (ATT) program in April 2021, which requires app developers to get users' explicit consent before tracking them across other companies' apps and websites.

Apple has become so powerful that it has changed the rules of the game to its own benefit, and it is now effectively winning. The Financial Times reported in October that Apple's ads now drive 58 percent of all downloads in the App Store, and more recently reported that ad revenues for major third-party app companies like Facebook and Snapchat have dropped by as much as 13 percent as a result.

It is clear that Apple's move, alongside Google's forthcoming transition to tracking people in "cohorts" rather than at the individual level, could shake up the uniquely opaque (but almost certainly icky) underworld of the internet that is ad tech. Every second we spend online, advertisers hawking everything from prescription drugs to political candidates compete for our attention. Internet companies use the ever-growing troves of information that they have about us, much of it gathered with the use of third-party cookies, to sell ad slots to the highest bidder. Today there is a vast ecosystem of companies that carry out this particular function of using our data to enable targeted advertising. But now two of the industry's biggest companies are shifting away from this model, albeit in different ways.
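To make those mechanics concrete, here is a minimal sketch (in TypeScript, using Express) of how a third-party tracker turns an embedded tracking pixel into a cross-site browsing profile. The tracker.example domain and the uid cookie name are hypothetical; this illustrates the general technique only, not any particular company's system.

```typescript
// Hypothetical third-party tracker. Publisher pages embed
// <img src="https://tracker.example/pixel.gif">; because the same cookie
// rides along on every embedding site, the tracker can stitch page views
// across the web into one profile.
import express from "express";
import { randomUUID } from "node:crypto";

const app = express();

app.get("/pixel.gif", (req, res) => {
  // Reuse the browser's ID if we have seen it before, on any site.
  const existing = (req.headers.cookie ?? "").match(/uid=([\w-]+)/)?.[1];
  const uid = existing ?? randomUUID();

  // SameSite=None; Secure is what lets this cookie travel on cross-site
  // requests -- precisely the behavior Chrome plans to block.
  res.setHeader(
    "Set-Cookie",
    `uid=${uid}; Max-Age=31536000; SameSite=None; Secure; HttpOnly`
  );

  // The Referer header names the publisher page, so each hit adds one
  // entry to this browser's cross-site browsing history.
  console.log(`user ${uid} visited ${req.headers.referer ?? "unknown page"}`);

  // Respond with a 1x1 transparent GIF.
  res
    .type("image/gif")
    .send(Buffer.from("R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", "base64"));
});

// TLS termination (required for Secure cookies) is omitted for brevity.
app.listen(8080);
```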

Although both companies say that they’re making these changes in order to better protect user privacy, the profit motives are clear, present, and enormous. While the changes may whittle away at the troves of data that so many digital companies have on us, they also will help to consolidate our digital dossiers in the hands of a few uniquely powerful platforms, and reduce or even eliminate many of the smaller players in the ecosystem.

If we’re really moving to a paradigm where first-party tracking dominates the sector, we have to ask: How might this shift affect people’s rights and matters of public interest? We know a lot about how these systems will affect people’s privacy, but what about other fundamental rights, like the right to information or non-discrimination?

Third-party tracking is now tied to some of the most insidious and harmful targeting practices around. With the help of a massive amount of third-party data—collected from third-party websites or apps through technical means such as cookies, plug-ins, or widgets—advertising can be hyper-personalized and tailored to consumer segments or even individuals. Political campaigns can target us to the point that they can swing an election, or tell us to go vote on the wrong day. Conspiracy theorists can capture vulnerable eyeballs and convince people that COVID-19 is a hoax. But it’s not entirely clear that the move away from third-party tracking will change these dynamics.

We can only know how good or bad these moves really are for users' rights, and for society at large, if we know what's happening to our data, and if companies give us some ability to decide who gets it and how they can use it. Unfortunately, neither Apple nor Google (nor any of the companies we evaluate) has ever met our standards for these kinds of disclosures.

This season, we’ve been studying this impending shift, assessing the motivations that seem to be driving Apple and Google to make these changes, and comparing companies’ public statements about their plans to their actual policies on things like algorithms and ad targeting. We are using our own standards to inform our understanding of how these changes will affect users’ rights, and what human rights-centric questions we should be asking Google as it rolls out its new “FLoC” system.

Apple is getting creepy

In 2020, Apple’s announcement of the ATT plan triggered loud public criticism from Facebook (now Meta). Most users access Meta’s services via mobile devices, many of which are owned and operated by Apple. This makes Apple the gatekeeper for any application available to iPhone or iPad users, Meta included.

A very public tête-à-tête soon ensued, much of which stemmed from an open letter that we at RDR wrote to Apple, pressing the company to roll out these changes on schedule in the name of increasing user control and privacy.

In response to our letter, Apple Global Privacy Lead Jane Horvath wrote that "tracking can be invasive and even creepy." She singled out Meta, saying that the company had "made clear that their intent is to collect as much data as possible across both first- and third-party products to develop and monetize detailed profiles of their users."

We stand by our original position, which was rooted in our commitment to user privacy and control. But we don't want to see these things come at the expense of competition.

With the new system in place and its newly dominant position in the ad market, we have to ask: What if Apple engages in similarly “creepy” practices by exploiting the boatloads of first-party data it has on its users? It is worth noting that while Apple now requires developers to explicitly capture user consent for tracking (via “opting in”), Apple users are subject to a separate set of rules about how Apple collects and uses their data. If they want to use Apple’s products, they have no choice but to agree. Also, recent research by the Washington Post and Lockdown suggests that some iPhone apps are still tracking people via fingerprinting on iOS, even when they’ve opted out.

The public face-off between the companies helped to clarify what actual motivations may have driven the change on Apple's part. The changes put the company in an even more powerful position to capture, make inferences about, and monetize our data. If its ad revenues since the change was implemented are any indication, the plan is working.

Apple has published policies acknowledging that it engages in targeted advertising. But there's a lot missing from the company's public policies and disclosures about how it treats our data.


  • Apple has published no public documentation explaining whether or how it conducts user data inference, a key ingredient in monetization of user data.
  • Apple discloses nothing about whether or not it collects user data from third parties via non-technical means.
  • Apple offers no evidence that it conducts human rights impact assessments on any of these activities.

When it comes to FLoC, what should we be asking Google?

Although it won't debut until 2023, we have some details about Google's "Federated Learning of Cohorts," aka FLoC, a project of the company's so-called Privacy Sandbox initiative. The company describes the system as "a new approach to interest-based advertising that both improves privacy and gives publishers a tool they need for viable advertising business models." What the company doesn't say is that this new paradigm may actually shut out other advertising approaches altogether.

From what Google has said so far, we know that FLoC will use algorithms to put users into groups that share preferences. The system will track those groups, rather than allowing each of us to be individually tracked across the web. Advertisers will be able to show ads to Chrome users based on these cohorts, which will contain a few thousand people each. The cohorts will be updated weekly, to make sure that the targeting is still relevant and to reduce the possibility of users becoming identifiable at the individual level.
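For a sense of how this would surface in the browser: during Chrome's 2021 origin trial, a page could read its visitor's cohort through the proposed document.interestCohort() method. The sketch below shows roughly what advertiser-side code looked like under that trial; the API was experimental and never standardized, so details may change or vanish before any wider rollout, and the ads.example URL is hypothetical.

```typescript
// Sketch of reading a FLoC cohort, based on the document.interestCohort()
// method exposed during Chrome's 2021 origin trial. The method was never
// standardized and is absent from TypeScript's DOM typings, hence the cast.
async function logCohort(): Promise<void> {
  const doc = document as Document & {
    interestCohort?: () => Promise<{ id: string; version: string }>;
  };

  if (!doc.interestCohort) {
    console.log("FLoC is not available in this browser");
    return;
  }

  try {
    // Every user whose recent browsing the algorithm grouped together
    // shares the same cohort ID; cohorts are recomputed weekly.
    const { id, version } = await doc.interestCohort();
    console.log(`cohort ${id} (algorithm version ${version})`);
    // An ad server would then pick ads for the cohort, not the individual,
    // e.g. fetch(`https://ads.example/serve?cohort=${id}`) -- hypothetical.
  } catch {
    // The call rejects when FLoC is blocked, e.g. by a permissions policy
    // or because the user opted out.
    console.log("cohort unavailable (opted out or blocked)");
  }
}

logCohort();
```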

The Electronic Frontier Foundation's Bennett Cyphers has noted that this weekly update will make FLoC cohorts "less useful as long-term identifiers, but it also [will make] them more potent measures of how users behave over time." It is also worth noting that the system will make it much easier to effectively use browser fingerprinting techniques that do enable individual-level targeting.
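To see why fingerprinting is so hard to stop, consider the deliberately simplified sketch below: it combines attributes that browsers expose to every page into a stable identifier, with no cookie involved. Real fingerprinting scripts draw on many more signals (canvas rendering, installed fonts, audio processing), but the principle is the same.

```typescript
// Deliberately simplified browser-fingerprinting sketch: hash together
// attributes the browser exposes to every page. No cookie is stored,
// yet the result is stable across sites and sessions.
async function fingerprint(): Promise<string> {
  const signals = [
    navigator.userAgent,
    navigator.language,
    navigator.hardwareConcurrency,
    screen.width,
    screen.height,
    screen.colorDepth,
    new Date().getTimezoneOffset(),
  ].join("|");

  // Hash the concatenated signals into a compact, opaque identifier.
  const bytes = new TextEncoder().encode(signals);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return [...new Uint8Array(digest)]
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

// The same browser yields the same hash on every site running this code,
// which is why fingerprinting survives the death of third-party cookies.
fingerprint().then((id) => console.log(`fingerprint: ${id}`));
```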

Learn more about FLoC with these explainers from EFF and RestorePrivacy.

It is important to understand that Google is not actually moving away from a targeted advertising business model. All we really know at this stage is that FLoC will constitute a move toward a paradigm where fingerprinting technology becomes much more powerful and easier to deploy, and where the industry's signature tracking techniques are algorithmically driven. If it's anything like Google Search, or the company's other products, we can expect to find very little public information on how these algorithms are built or deployed.

We also expect that it will become even more difficult to audit and hold the company accountable than was the case with cookies, which are easy to test for privacy violations. Google has made big promises about supporting and building a more open web. But from where we’re standing, FLoC looks like a new variation on the walled garden.

In fact, documents that were recently unsealed in a massive antitrust suit filed against Google charge that this is all an effort to shore up power in the online advertising market. The suit cites internal company documents saying that Project NERA, the precursor to the Privacy Sandbox, was meant to “successfully mimic a walled garden across the open web [so] we can protect our margins.” The unsealed documents also suggest that the “Privacy Sandbox” name and branding were rolled out in order to reframe the changes using privacy language, and to deflect public scrutiny. The court filings also provide a lot of support for the idea that Google’s main constituency here is advertisers, not users.

Will this really work? Does Google have enough data about us for this to be effective? In short, yes. Google can afford to shift to a system like FLoC precisely because of its monopoly status in the browser market alongside other key markets. Thanks to its preponderance of services—Chrome Browser, Gmail, Google Drive, Google Maps, and, of course, Android—the company has access to incredibly rich and sensitive user data at scale, second to no other company outside China. While Google's business model relies heavily on advertising, it does not need to rely on third-party data in order to be an effective seller of ad space. With this transition, it could effectively cut out the third-party ad sellers altogether.

It's also important to consider how this change will affect the broader market. We're moving from a diverse (if unsavory) array of players in the ad tech underworld to a paradigm that will concentrate profit and power in the hands of a powerful few. Google controls over two-thirds of the global web browser market. Once the Chrome browser starts blocking third-party cookies, most internet users will be using browsers without third-party cookies.

Although it will probably bring some benefits for users, the change is clearly bad news for the many actors in the ad tech ecosystem that rely heavily on third-party data, and for the ad tech firms that buy and sell it. For firms that cannot collect data on users in the ways that Google, Apple, or Facebook can, the end of third-party cookies will either snuff out their business models or force radical changes to them.

Here are our key questions for Google:

Will users be able to see which cohorts they belong to under FLoC, and on what grounds they were assigned to them? Google should make it clear to users what controls they have over their information and preferences under FLoC.

How will Google identify and address human rights risks in its development and implementation of FLoC? Beyond privacy, targeted advertising can pose risks to other rights, like the right of access to information or the right to non-discrimination. If the company identifies problems in these areas, how will it address them?

Will Google stop collecting third-party data on its users through non-technical means when it starts blocking third-party cookies in its browser? Companies may also acquire user information from third parties through non-technical, contractual agreements. For example, Bloomberg reported in 2018 that Google buys credit card information from Mastercard in order to track whether users buy products that were marketed to them through targeted advertising. Such contractually acquired data can become an integral part of the digital dossier that a company holds on its users, and it can form the basis for inferred user information.

In the 2020 RDR Index, we found that most companies say nothing about whether and how they acquire data through contractual agreements.

None of the companies disclosed what user information they collect from third parties through non-technical means.

Data from Indicator P9 in the 2020 RDR Index.

As these companies consolidate power over our data, what should digital rights advocates focus on?

Google and Apple have both made public commitments to human rights, and both are now trying to position themselves as champions of privacy on the strength of the changes they have introduced or plan to introduce. This raises the question of whether these companies consider any risks of targeted advertising beyond privacy.

In the 2020 RDR Index, we introduced standards on targeted advertising and algorithmic systems to address harms stemming from companies' business models. None of the digital platforms we ranked in 2020 assess privacy or freedom of expression risks associated with their targeted advertising policies and practices. Facebook was the only company that provided some information on how it assesses discrimination risks associated with its targeted advertising practices, and even this was limited in scope.

When we think of some of the long-term societal effects of targeted advertising, like disinformation around elections and matters of public health, these questions must be part of the equation. People need and deserve to have accurate information about how to protect their health in a pandemic. But we know from independent research and reporting that targeted ads have had an adverse impact on people’s ability to access such information. When it comes to elections, jobs, housing, and other fundamental parts of people’s lives, we also know that Big Tech companies have enabled advertising that discriminates on the basis of race, gender, and other protected characteristics. This is equally harmful. In some cases, it is a violation of U.S. law.

Will the move away from third-party cookies mean the end of tracking and targeting? Not likely. User data is still seen as an essential source of added value for digital platforms. Companies like Google and Facebook are digital gatekeepers and have their own walled gardens of (first-party) user data that no one else can see. Google claims that with the introduction of FLoC it will no longer be possible to target individuals, but it is unclear whether and how it will process users' browsing data, and what it will infer from that data, in order to allocate them into cohorts.

None of the companies in the 2020 RDR Index provided clear information on their data inference policies and practices.

Companies disclosed nothing about the selected indicators.

Data from Indicators P7 and P3b in the 2020 RDR Index.

Are any of these changes going to alter company business models to better align with the public interest? In the case of Google, Chrome users will no longer have to contend with the opacity of third-party tracking. Rather than wondering which third parties might have their data, and how they're using it, they will know that most of their data sits with Google.

But without more transparency from the company, it will be just as impossible to find out how Google uses our data, and how our data might serve advertisers seeking to do things like swing an election or promote anti-vaccine propaganda. The same will be true for Apple. Until both companies are forced to put this information out for public view, we will have about as little knowledge of (or control over) how our information is being used as we do now.

London street art. Photo by Annie Spratt. Free to use under Unsplash license.

This is the RADAR, Ranking Digital Rights’ newsletter. This special edition was sent on October 21, 2021. Subscribe here to get The RADAR by email.

Since the Wall Street Journal’s release of the Facebook Files and the subsequent debut of whistleblower Frances Haugen in the public conversation, we’ve seen a lot of pushback from Facebook. Company executives have claimed that Haugen didn’t have sufficient knowledge about the practices she brought to light, argued that the WSJ series “mischaracterized” Facebook’s approach, and attacked a network of journalists working on a series of follow-up reports drawing on the documents.

The company can obfuscate and deflect as it wishes, but the data Facebook is willing to release—and that which it keeps private—speaks for itself. Companies often wax poetic about the social and commercial benefits that they bring to people and businesses, but when it comes to their concrete effects on people’s lives and rights, policies and practices are what actually count. That is what RDR is here to measure. Although we have a strong focus on company policies, which establish a baseline for what they say they will do, we also ask companies to publish concrete evidence of their practices, with things like transparency reports.

Last week, we "cross-checked" Facebook, comparing company statements and policies with the Haugen revelations, and with our own data and findings since 2015. Again and again, we see that in areas where Facebook is most opaque about its practices, such as targeted advertising and use of algorithms to enforce ad content policies, the hard evidence laid out by Haugen and other whistleblowers like Sophie Zhang paints a troubling picture of how the company treats its users. As Haugen told the U.S. Congress a few weeks ago, profits do take priority over the public interest at Facebook.

Read “Cross-checking the Facebook files” →

If Facebook’s decisions are mainly driven by profit, then we need to follow the money. Facebook’s earnings reports show that at least 98% of the company’s revenue comes from advertising, and we know that ad sales on Facebook are driven by the company’s vast data collection machine. That’s why we’ve joined Fight for the Future’s call on Congress to pass federal privacy legislation. We hope our friends and allies will consider doing the same.

See our 2020 report card for Facebook →

RDR’s 2020 encryption scores for digital platforms. See full results.

State and corporate eyes are still watching us. So let’s encrypt!

Happy Global Encryption Day! At RDR, we push companies to encrypt user communications and private content so that users can control who has access to them. In our 2020 research, we found that some of the world’s biggest companies still have a very long way to go on encryption.

Since 2015, we’ve evaluated companies’ use of encryption by looking for evidence that they encrypt the transmission of user communications by default and using unique keys. We also look for evidence that the company allows users to secure their private content using end-to-end encryption, or full-disk encryption (where applicable), and ask if these things are enabled by default. The chart above shows digital platforms’ scores on our encryption indicator from 2020.
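For readers curious about the transmission piece of this indicator, here is a minimal Node.js sketch of the kind of check involved: it reports which TLS protocol and cipher a server actually negotiates. The host is a placeholder, and this is an illustration rather than our evaluation methodology; end-to-end and full-disk encryption cannot be probed this way.

```typescript
// Minimal Node.js sketch: check whether a service encrypts transmission,
// and report the negotiated TLS protocol and cipher. Illustrative only;
// "example.com" is a placeholder host.
import * as tls from "node:tls";

function checkTransportEncryption(host: string): void {
  const socket = tls.connect({ host, port: 443, servername: host }, () => {
    // getProtocol()/getCipher() report what was actually negotiated,
    // e.g. "TLSv1.3" and "TLS_AES_256_GCM_SHA384".
    console.log(`${host}: ${socket.getProtocol()}, ${socket.getCipher().name}`);
    socket.end();
  });
  socket.on("error", (err) => {
    console.log(`${host}: TLS handshake failed (${err.message})`);
  });
}

checkTransportEncryption("example.com");
```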

We observed a steep decline in encryption standards for the Russian companies that we evaluate, Yandex and Mail.Ru, owing to proposed regulations that would limit the use of encryption. While Mail.Ru (owner of VKontakte) never had especially strong practices in this area, search engine leader Yandex distinguished itself on encryption in years past, outperforming Google, Facebook, and Microsoft as recently as 2019.

Of course, private companies like the ones we rank are only part of the equation. Companies specializing in surveillance software continue to reap huge profits from sales to government agencies that target not only legitimate criminal activity, but also activists and journalists who are working to hold their governments to account. Thanks to years of research by groups like The Citizen Lab and Amnesty International, and the more recent revelations around the broad-based use of NSO Group's Pegasus software, there is more hard technical evidence in the public domain than ever before of how these technologies are used and whom they harm.

This week, we are proud to support a letter to the U.N. Human Rights Council pushing members to mandate independent investigations of the sale, export, transfer, and use of surveillance technology like Pegasus. We also join civil society groups around the world, in a campaign organized by the Internet Society, to call on both governments and the private sector to enhance, strengthen, and promote use of strong encryption to protect people everywhere.

Global investors are calling on tech companies to implement our recommendations

A group of global investors with more than $6 trillion in assets called on the 26 tech and telecom companies we ranked in the last RDR Corporate Accountability Index to commit to some of our high-level recommendations. In concert with our report, the Investor Alliance for Human Rights brought together nearly 80 investor firms to support this effort. The group calls on companies to:

  • implement robust human rights governance;
  • maximize transparency on how policies are implemented;
  • give users meaningful control over their data and data inferred about them;
  • and account for harms that stem from algorithms and targeted advertising.

RDR Media Hits

Tech Policy Press: Will creating third-party recommender systems or "middleware" solve content problems on Facebook? At a recent symposium hosted by Tech Policy Press, featuring Daphne Keller and Francis Fukuyama and moderated by Richard Reisman, RDR Senior Policy and Partnerships Manager Nathalie Maréchal explained why she's not convinced. Alongside the numerous privacy-protection pitfalls of third-party recommender systems, this solution doesn't address the core issue at hand: the surveillance capitalism business model. Read the transcript at Tech Policy Press.

MIT Tech Review: RDR Projects Director Ellery Biddle spoke with the Tech Review’s Karen Hao about the viability of Facebook whistleblower Frances Haugen’s proposal to regulate algorithms by creating a carve-out in Section 230 of the Communications Decency Act. In short, she says we’ll need a lot more transparency around algorithms before we can look to solutions like this one. Read via MIT Tech Review.

The Logic: The Government of Canada’s proposed online harms bill is “unworkable,” according to RDR’s Maréchal. She offered key points from RDR’s comments on the bill, in an interview with The Logic, a Canadian publication covering the innovation economy. Read via The Logic (paywalled).

National Journal: Maréchal also spoke with the National Journal to push back on Rep. Pallone’s proposed bill to reform Section 230, saying that the bill “falls into the same trap of all the other well-intentioned 230 bills.” Pointing to the experience of sex workers in the wake of SESTA/FOSTA carve-outs, Maréchal asserted that the carve-outs often lead to companies erring on the side of mass removals of content posted by users, forcing marginalized individuals off the internet. Read via National Journal.

Events

UCLA Institute for Technology, Law & Policy | Power and Accountability in Tech
November 5 at 4:00 PM ET | Register here

RDR Director Jessica Dheere joins UCLA’s week-long conference examining corporate power, multi-stakeholder engagement, and solutions to uphold human rights. Jessica will speak on a panel alongside Nandini Jammi, co-founder of Check My Ads; Lilly Irani, associate professor of Communication and Science Studies at UC San Diego; and Isedua Oribhabor, business and human rights lead at Access Now.

UCLA Institute for Technology, Law & Policy | Transparency and Corporate Social Responsibility
November 17 at 3:00 PM ET | Register here

RDR Senior Policy and Partnerships Manager Nathalie Maréchal will join UCLA professor Lynn M. LoPucki and SASB Standards Associate Director of Research Greg Waters to discuss the importance of transparency for accountable corporate governance in the tech sector.

A global group of investors with more than $6 trillion in assets has sent letters calling on the 26 tech and telecom companies we ranked in the last RDR Corporate Accountability Index to commit to our core recommendations. We push companies to:

  • implement robust human rights governance
  • maximize transparency on how policies are implemented
  • give users meaningful control over their data and data inferred about them
  • account for harms that stem from algorithms and targeted advertising

Coordinated by the Investor Alliance for Human Rights, the campaign comprises nearly 80 investment groups that are applying pressure on technology companies to resolve these long-standing, systemic issues. The significant increase in support for this statement relative to previous years signals a growing desire among investors for good corporate governance and respect for human rights within the tech sector. The investor groups urged companies to implement key corporate governance measures that we at RDR have long pushed for, including strengthened oversight structures to prevent companies from causing or enabling human rights violations.

Ranking Digital Rights is proud to support the Investor Statement on Corporate Accountability for Digital Rights and investors’ direct engagement with some of the largest ICT companies in the world. Through our annual Corporate Accountability Index, we equip investors and advocates alike with the data and analysis they need to draft and promote shareholder resolutions that put human rights first.

Read the Investor Statement

We invite investors and asset managers seeking guidance on the human rights risks of technology companies to reach out to us at investors@rankingdigitalrights.org.

London street art. Photo by Annie Spratt. Licensed for non-commercial reuse by Unsplash.

Written and compiled by Alex Rochefort, Zak Rogoff, and RDR staff.

The revelations of Facebook whistleblower Frances Haugen, published in SEC filings and in the Wall Street Journal’s “Facebook Files” series, have brought forth irrefutable evidence that Facebook has repeatedly misled or lied to the public, and that it routinely breaks its own rules, especially outside the U.S.

Corroborating years of accusations and grievances from global civil society, the revelations raise the question: What do Facebook's policies really tell us about how the platform works? The documents offer us a rare opportunity to cross-check the company's public commitments against its actual practices—and our own research findings of the past six years.


How does Facebook handle hate speech?

What Facebook says publicly:

In 2020, Facebook claimed that it proactively removes 95% of posts that its systems identify as hate speech. The remaining 5% are flagged by users and removed on review by moderators.

What the Facebook files prove:

Facebook estimates that it takes action on “as little as 3-5% of hate speech” because of limitations in its automated and human content moderation practices. The company does not remove “95% of hate speech” that violates its policies.

What we know:

While not technically contradictory, these statements are emblematic of a longstanding strategy by Facebook to obfuscate and omit information in transparency reports and other public statements. These statements reinforce what we’ve found in our research. While we have noted that Facebook’s policies clearly outline what content is prohibited and how it enforces its rules, the company does not publish data to corroborate this. Without this data, it is impossible for researchers to verify that the company does what it says it will do.

Our most recent Facebook company report card highlights the company’s failure to be fully transparent about its content moderation practices. Carrying out content moderation at scale is a complex challenge. But providing more transparency about content moderation practices is not. See our 2020 data on transparency reporting.


How does Facebook handle policy enforcement when it comes to human rights violations around the world?

What Facebook says publicly: The company says it takes seriously its role as a communication service for the global community. In a 2020 Senate hearing, CEO Mark Zuckerberg noted that the company's products "enabled more than 3 billion people around the world to share ideas, offer support, and discuss important issues" and reaffirmed a commitment to keeping users safe.

What the Facebook files prove: Facebook allocates 87% of its budget for combating misinformation to issues and users based in the U.S., even though these users make up only about 10% of the platform's daily active users. These policy choices have exacerbated the spread of hate speech and misinformation in non-Western countries, undermined efforts to moderate content in regions where internal conflict and political instability are high, and contributed to offline harm, including ethnic violence.

What we know: The Haugen revelations corroborate what civil society and human rights activists have been calling attention to for years—Facebook is insufficiently committed to protecting its non-Western users. Across the Global South, the company has been unable—or unwilling—to adequately assess human rights risks or take appropriate actions to protect users from harm. This is especially concerning in countries where Facebook has a de facto monopoly on communications services thanks to its zero-rating practices.

In our 2020 research, Facebook was weak on human rights due diligence, and failed to show clear evidence that it conducts systematic impact assessments of its algorithmic systems, ad targeting practices, or processes for enforcing its Community Standards. The company often points to its extensive Community Standards as evidence that it takes seriously its responsibility to protect people from harm. But we now have proof that these standards are selectively enforced, in ways that reinforce existing structures of power, privilege, and oppression. See our 2020 data on human rights impact assessments for algorithmic systems and zero-rating.


How does Facebook handle policy enforcement for high-profile politicians and celebrities?

What Facebook says publicly: Facebook has wavered on the question of whether and how to treat speech coming from high-profile public figures, citing exceptions to its typical content rules on the basis of “newsworthiness.” But in June 2021, the company said that it had reined in these exceptions at the recommendation of the Facebook Oversight Board. A blog post about the shift asserted: “we do not presume that any person’s speech is inherently newsworthy, including by politicians.”

What the Facebook files prove: Facebook maintains a special program, known as XCheck (or “cross-check”) that exempts high-profile users, such as politicians and celebrities, from the platform’s content rules. A confidential internal review of the program stated the following: “We are not actually doing what we say we do publicly….Unlike the rest of our community, these people can violate our standards without any consequences.”

What we know: We know that speech coming from high-profile people, especially heads of state, can have a significant impact on what people believe is true or false, and what they feel comfortable saying online. Facebook maintains an increasingly detailed set of Community Standards describing what kinds of content are and are not allowed on its platform, but as our data over the years has shown, the company has long failed to show evidence (like transparency reports) proving that it actually enforces these rules. What are the human rights consequences of creating a two-tiered system like XCheck? Our governance data also shows that Facebook's human rights due diligence processes hardly scratch the surface of this question.


Does Facebook prioritize growth over democracy and the public interest?

What Facebook says publicly: In a 2020 Facebook post, Mark Zuckerberg announced several Facebook policy changes meant to safeguard the platform against threats to the U.S. election, including a ban on political and issue ads, steps to reduce misinformation from going viral, and “strengthened enforcement against militias, conspiracy networks like QAnon, and other groups that could be used to organize violence or civil unrest…”

What the Facebook files prove: These measures stayed in place through the election, but were quickly rolled back afterward because they undermined "virality and growth on its platforms." Other interventions that might have reduced the spread of violent or conspiracist content around the 2020 U.S. election were rejected by Facebook executives out of fear they would reduce user engagement metrics. Facebook whistleblower Haugen says the company routinely chooses platform growth over safety.

What we know: We know that Facebook’s systems for moderating both organic and ad content, as well as ad targeting, have a tremendous impact on what information people see in their feeds, and what they consequently believe is true. This means that Facebook plays a role in influencing people’s decisions about who to vote for. The company has failed to publish sufficient information about how it moderates these types of content. And while it has published some policies and statements on these processes, Haugen and others have proven that these statements are not always true. See our 2020 data on algorithmic transparency and rule enforcement related to advertising, ad targeting, and organic content.


Does Facebook knowingly profit from disinformation?

What Facebook says publicly: In a 2021 House hearing, Mark Zuckerberg deflected the suggestion from Congressman Bill Johnson, a Republican from Ohio, that Facebook has profited from the spread of disinformation.

What the Facebook files prove: Facebook profits from all of the content on its platform. Its algorithmically fueled, ad-driven business model depends on keeping users active on the platform in order to make money from ads.

What we know: As we’ve said before, the company has never been sufficiently transparent about how it builds or uses algorithms.

Automated tools are essential to social media platforms' content distribution and filtering systems. They are also integral to platforms' surveillance-based business practices. Yet Facebook and its competitors publish very little about how their algorithms and ad targeting systems are designed or governed; our 2020 research showed just how opaque this space really is. Unchecked algorithmic content moderation and ad targeting processes raise significant privacy, freedom of expression, and due process concerns. Without greater transparency around these systems, we cannot hold Facebook accountable to the public. See our 2020 data on human rights impact assessments for targeted advertising and algorithmic systems.

Facebook’s business model lies at the heart of the company’s many failures. Despite the range of harms it brings to people’s lives and rights, Facebook has continued its relentless pursuit of growth. Growth drives advertising, and ad sales account for 98 percent of the company’s revenue. Unless we address these structural dynamics — starting with comprehensive federal privacy legislation in the U.S. — we’ll be treating these symptoms forever, rather than eradicating the disease.

RDR has contributed to the public consultation on the Canadian government's proposed legislative and regulatory framework to address harmful content online. The framework sets out which entities would be subject to it, what types of content would be regulated, and new rules and obligations for regulated entities, and establishes two new regulatory bodies and an advisory body that would oversee it. We believe that efforts to address these harms must promote and uplift freedom of expression and information as well as our fundamental right to privacy. We commend the Canadian government's objective to create a safe and open internet and have a few recommendations on how the government can tackle the underlying causes of online harms. Read the introduction of our submission below or download it in its entirety here.

Honorable members of the Department of Canadian Heritage:

Ranking Digital Rights (RDR) welcomes this opportunity for public consultation on the Canadian government’s proposed approach to regulating social media and combating harmful content online. We work to promote freedom of expression and privacy on the internet by researching and analyzing how global information and communication companies’ business activities meet, or fail to meet, international human rights standards (see www.rankingdigitalrights.org for more details). We focus on these two rights because they enable and facilitate the enjoyment of the full range of human rights comprising the Universal Declaration of Human Rights (UDHR), especially in the context of the internet.

RDR broadly supports efforts to combat human rights harms that are associated with digital platforms and their products, including the censorship of user speech, incitement to violence, campaigns to undermine free and fair elections, privacy-infringing surveillance activities, and discriminatory advertising practices. But efforts to address these harms need not undermine freedom of expression and information or privacy. We have long advocated for the creation of legislation to make online communication services (OCSs) more accountable and transparent in their content moderation practices and for comprehensive, strictly enforced privacy and data protection legislation.

We commend the Canadian government’s objective to create a “safe, inclusive, and open” internet. The harms associated with the operation of online social media platforms are varied, and Canada’s leadership in this domain can help advance global conversations about how best to promote international human rights and protect users from harm. As drafted, however, the proposed approach fails to meet its stated goals and raises a set of issues that jeopardize freedom of expression and user privacy online. We also note that the framework contradicts commitments Canada has made to the Freedom Online Coalition (FOC) and Global Conference for Media Freedom, as well as previous work initiating the U.N. Human Rights Council’s first resolution on internet freedom in 2012. As Canada prepares to assume the chairmanship of the FOC next year, it is especially important for its government to lead by example. Online freedom begins at home. As RDR’s founder Rebecca MacKinnon emphasized in her 2013 FOC keynote speech in Tunis, “We are not going to have a free and open global Internet if citizens of democracies continue to allow their governments to get away with pervasive surveillance that lacks sufficient transparency and public accountability.”

Like many other well-intentioned policy solutions, the government's proposal falls into the trap of focusing exclusively on the moderation of user-generated content while ignoring the economic factors that drive platform design and corporate decision-making: the targeted-advertising business model. In other words, restricting specific types of problematic content misses the forest for the trees. Regulations that focus on structural factors—i.e., industry advertising practices, user surveillance, and the algorithmic systems that underpin these activities—are better suited to address systemic online harms and, if properly calibrated, more sensitive to human rights considerations.

In this comment we identify five issues of concern within the proposal and a set of policy recommendations that, if adopted, can strengthen human rights protections and tackle the underlying causes of online harms.

Download our entire submission here.