
Original art by Paweł Kuczyński

As the 2020 U.S. presidential campaign continues amid a pandemic with no precedent in living memory, politicians on both sides of the aisle are understandably eager to hold major internet companies accountable for the spread of disinformation, hate speech, and other problematic content. Unfortunately, their proposals focus on pressuring companies to purge their platforms of various kinds of objectionable content, including by amending or even revoking Section 230 of the 1996 Communications Decency Act, and do nothing to address the underlying cause of dysfunction: the surveillance capitalism business model.

Today we’re publishing a new report, “It’s Not Just the Content, It’s the Business Model: Democracy’s Online Speech Challenge,” that explains the connection between surveillance-based business models and the health of democracy. Written by RDR Senior Policy Analyst Nathalie Maréchal and journalist and digital rights advocate Ellery Roberts Biddle, the report argues that forcing companies to take down more content, more quickly, is ineffective and would be disastrous for free speech. Instead, we should focus on the algorithms that shape users’ experiences.

In the report, we explain how distinct algorithms determine the spread of user-generated content and the placement of paid advertising, but both share the same logic: showing each user the content they are most likely to engage with, according to the algorithm’s calculations. Another type of algorithm performs content moderation: identifying and removing content that breaks the company’s rules. But this is no silver bullet, as these tools are unable to understand context, intent, and other factors that are key to whether a post or advertisement should be taken down.
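To make this shared logic concrete, here is a minimal, hypothetical sketch of engagement-based ranking in Python. The signals, weights, and function names are our own illustration of the general technique, not any platform’s actual code:

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_click: float    # model's estimate that this user will click
    predicted_share: float    # model's estimate that this user will share
    predicted_comment: float  # model's estimate that this user will comment

def engagement_score(post: Post) -> float:
    """Collapse predicted engagement signals into one ranking score.
    The weights are illustrative; real systems tune them continuously."""
    return (1.0 * post.predicted_click
            + 3.0 * post.predicted_share
            + 2.0 * post.predicted_comment)

def rank_feed(candidates: list[Post]) -> list[Post]:
    """Order candidate posts so the most 'engaging' content comes first,
    regardless of whether it is accurate, hateful, or divisive."""
    return sorted(candidates, key=engagement_score, reverse=True)
```

Notice that nothing in this loop asks whether a post is true or harmful; if outrage reliably predicts clicks and shares, outrage rises to the top.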

We outline why today’s technology is not capable of eliminating extremism and falsehood from the internet without stifling free expression to an unacceptable degree. While we accept that there will never be a perfect solution to these challenges, especially not at the scale at which the major tech platforms operate, we assert that by changing the systems that decide so much of what actually happens to our speech (paid and unpaid alike) once we post it online, companies could significantly reduce the prevalence of disinformation and hateful content.

At the moment, determining exactly how to change these systems requires insight that only the platforms possess. Very little is publicly known about how these algorithmic systems work, despite their enormous influence on our society. If companies won’t disclose this information voluntarily, Congress must intervene and insist on greater transparency, as a first step toward accountability. Once regulators and the American public have a better understanding of what happens under the hood, we can have an informed debate about whether to regulate the algorithms themselves, and if so, how.

This report is the first in a two-part series and relies on more than five years of research for the RDR Corporate Accountability Index as well as the findings from a just-released RDR pilot study testing draft indicators on targeted advertising and algorithmic systems.

The second installment, to be published later this spring [now available here], will examine two other regulatory interventions that would help restructure our digital public sphere so that it bolsters democracy rather than undermines it. First, national privacy legislation would blunt the power of content-shaping and ad-targeting algorithms by limiting how personal information can be used. Second, requiring companies to conduct human rights impact assessments about all aspects of their products and services—and to be transparent about it—will help ensure that they consider the public interest, not just their bottom line.

We had to cancel our planned launch event due to the novel coronavirus, but we’ll be organizing webinars to discuss why we think it’s the business model we should pay attention to, not just the content.

Please read the report, join the conversation on Twitter using #itsthebusinessmodel, and email us at itsthebusinessmodel@rankingdigitalrights.org with your feedback and to request a webinar for your organization.

We would like to thank Craig Newmark Philanthropies for making this report possible.


Algorithms now shape nearly every facet of our digital lives. They collect and process vast amounts of user data, compiling sophisticated profiles of every user. They categorize us according to our demographics, behaviors, and location data, and they make assumptions about our likes and dislikes, known as inferred data. Then they monetize our digital dossiers, offering them up for advertisers to bid on as the advertisers tick boxes to pick the characteristics of the people they want to target.

At least we think that’s what algorithms do, based on what we’ve been able to learn from research and reporting. But most of us are still in the dark about exactly how they do it.
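Still, the broad outline is consistent across research and reporting. As a rough illustration, the profiling-and-targeting pipeline might look something like the following sketch; every field name, category, and matching rule here is hypothetical, pieced together from public descriptions rather than any company’s disclosed code:

```python
# A simplified, hypothetical model of ad targeting as described above.
user_profile = {
    "demographics": {"age_range": "25-34", "gender": "f"},    # declared data
    "behaviors": ["frequent_traveler", "online_shopper"],     # observed data
    "location": "Washington, DC",
    "inferred_interests": ["politics", "fitness", "cooking"]  # inferred data
}

ad_campaign = {
    "advertiser": "ExampleCo",
    "targeting": {                       # the boxes the advertiser ticks
        "age_ranges": {"25-34", "35-44"},
        "interests": {"fitness", "running"},
    },
    "bid_usd": 0.85,                     # price offered per impression
}

def matches(profile: dict, targeting: dict) -> bool:
    """Return True if this user's dossier ticks the advertiser's boxes."""
    age_ok = profile["demographics"]["age_range"] in targeting["age_ranges"]
    interests_ok = bool(targeting["interests"] & set(profile["inferred_interests"]))
    return age_ok and interests_ok

if matches(user_profile, ad_campaign["targeting"]):
    print(f"Eligible: {ad_campaign['advertiser']} bids ${ad_campaign['bid_usd']}")
```

The structure, not the specific code, is the point: declared, observed, and inferred attributes feed a matching step that advertisers bid against, and none of it is visible to the person being targeted.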

According to new findings in a pilot study released by RDR this week, not one of the eight U.S. and European companies evaluated disclosed how it develops and trains its algorithmic systems. This means that every piece of promoted or recommended content, and every ad we encounter, appears on our screen as the result of a process and a set of rules no one but the company can see. These processes not only pose significant risks to privacy—particularly when companies collect data and make inferences about users without their knowledge or consent—but can also result in discriminatory outcomes if algorithmic systems are based on biased data sets.

Funded by the Open Society Foundations’ Information Program, the study was part of RDR’s ongoing work to include questions related to targeted advertising and algorithmic systems in its methodology for the RDR Corporate Accountability Index. The companies evaluated were U.S. digital platforms Apple, Facebook, Google, Microsoft, and Twitter, and European telecom companies Deutsche Telekom, Telefónica, and Vodafone.

The pilot study evaluated the companies’ transparency about their use of targeted advertising and algorithmic systems based on a set of draft indicators developed by RDR last year. Generated from real-world human rights risk scenarios (detailed documents here and here) and grounded in international human rights frameworks, the indicators set standards for how companies should disclose policies and practices related to targeted advertising and algorithmic systems as well as how they should govern such practices and assess the risks they pose to human rights.

The results of the pilot study reveal multiple shortcomings across all companies. In addition to disclosing nothing about the development and training of algorithmic systems, companies did not disclose whether or how users can control how their information is used or which categories they are sorted into. While most companies disclosed some information about their targeting rules, no company disclosed any data about what actions users can take to remove ad content that violates these rules, making it impossible to hold companies accountable to their own terms of service.

In the realm of corporate governance, the European telecoms led in making explicit public commitments to respect human rights as they develop and use algorithmic systems. Among U.S. companies, only Microsoft disclosed whether it conducts risk assessments of the impact of its development and use of algorithmic systems on freedom of expression and privacy. No company in this pilot disclosed whether it conducts risk assessments on its use of targeted advertising.

Companies also showed little to no commitment to informing users about the potential human rights harms associated with algorithmic systems and targeted ads.

The pilot findings will help RDR determine which of the draft indicators to finalize and include in the updated methodology for the 2020 RDR Index. The findings also establish a baseline against which we can measure company improvements even before the next RDR Index is released. Further, the pilot findings offer a glimpse of the transparency and accountability challenges that tech companies have yet to address with regard to targeted advertising and algorithmic systems and provide an important benchmark for the road ahead.

Finally, the pilot findings also informed RDR’s new policy report, “It’s Not Just the Content, It’s the Business Model: Democracy’s Online Speech Challenge.” The first in a two-part series aimed at U.S. policymakers and anybody concerned with the question of how internet platforms should be regulated, the report is set for release tomorrow. Part two, which will focus on corporate governance of targeted advertising and algorithmic systems, will be out later this spring.

We welcome input or feedback about the research presented in this study or about the methodology; write to us at methodology@rankingdigitalrights.org.


Hate speech. Viral disinformation campaigns. Political polarization propelled by targeted ads.

Pressure is mounting on policymakers to hold internet platforms liable for these kinds of online speech. So far, most regulatory options focus on curtailing free expression in a way that could threaten protections offered by the First Amendment and Article 19 of the Universal Declaration of Human Rights.

On March 17, RDR will publish our first major policy report, “It’s Not Just the Content, It’s the Business Model: Democracy’s Online Speech Challenge.” The report will point to ways to regulate companies while protecting freedom of expression online.

Note: Our March 17 event, “It’s Not Just the Content, It’s the Business Model: Democracy’s Online Speech Challenge,” is canceled due to evolving concerns around coronavirus (COVID-19) and changes in speaker availability. We apologize for any inconvenience this may cause.

Download: Ranking Digital Rights’ response to Facebook on the Oversight Board bylaws, trust, and human rights review


Today, Facebook released the highly anticipated bylaws for its Oversight Board, the soon-to-launch independent body that will allow users to appeal the company’s content moderation decisions before independent panels of policy experts.

We at RDR think this experiment in internet governance shows real progress toward new models of content moderation that protect and promote human rights. The bylaws reflect improved remedy with binding results, establish commitments to disclose data, and implement some of the recommendations of a third-party human rights review commissioned by Facebook. At the same time, they reveal that much work remains to be done for these models to succeed and endure. Universal human rights principles should play a central role in the Oversight Board’s processes and structures, and its scope should extend to Facebook’s due diligence mechanisms and algorithmic oversight.

The release of the bylaws follows last month’s announcement that the Oversight Board will operate under an independent trust and the publication of a third-party human rights review of its creation and prospective operations, conducted by BSR. Today, we are publishing a full response to all three developments.

Facebook’s Oversight Board (sometimes referred to as Facebook’s ‘Supreme Court’) has been a long time coming. The company has faced a barrage of criticism in recent years for its lackluster responses to hate speech, disinformation campaigns, and attempts to incite violence through the platform, among other content issues. RDR itself has pushed hard for greater transparency around the company’s Community Standards, which govern what can and cannot be expressed on the platform. There have also been calls for greater transparency around the mechanisms controlling how some voices are amplified on Facebook while others are silenced or obscured.

The Oversight Board, first announced by CEO Mark Zuckerberg in a 2018 blog post, will seek to address some of these shortcomings by offering a binding grievance mechanism unswayed by Facebook’s influence.

In May 2019, reflecting on the Oversight Board’s draft Charter, we argued that the creation of an independent governance and appeals mechanism for content moderation is both critical and timely. The 2019 RDR Index revealed profound gaps in Facebook’s remedy and grievance mechanisms, which were among the weakest of any ranked company. RDR has strongly advocated for Facebook to incorporate the Santa Clara Principles on Transparency and Accountability in Content Moderation into its appeals processes, thus embracing a roadmap to a system of remedy grounded in human rights principles.

The Oversight Board’s newly released bylaws show signs of progress toward an appeals mechanism – a way for users to formally appeal Facebook’s decisions to remove or preserve controversial pieces of content – that may really work in practice. Targeting some of the weaknesses RDR has identified in Facebook’s existing remedy processes, they provide clear timeframes for most aspects of the Oversight Board’s operations and elaborate on the data that will be disclosed as the Oversight Board carries out its mandate. These are promising developments in the direction of transparency and accountability.

But there is significant room for improvement. First and foremost, human rights norms could play a much larger role in both the bylaws and the Charter. From the inception of Facebook’s public consultations on the Oversight Board, RDR has pressed for the Board to be anchored in universal human rights principles, which apply to companies through the UN Guiding Principles on Business and Human Rights. These norms should be a core component of the Oversight Board’s work and permeate its operations, as independent experts have argued repeatedly.

We recognize the progress from the draft Charter, which makes no reference to human rights norms, to the final Charter and bylaws, where the Board’s decision-making process includes assessing the impact of content removal on the right to free expression. But this neither covers the full spectrum of human rights nor equates to accepting human rights principles as the cornerstone. Facebook has made a commitment to human rights norms through its membership in the Global Network Initiative, whose Principles are grounded in them. The company should embrace and reiterate this commitment in every new endeavor, including the Oversight Board.

The bylaws also fail to adequately acknowledge the role of algorithms in promoting and amplifying problematic speech. In our present online reality, where platforms are no longer limited to keeping content up or taking it down, the Oversight Board should also have input on other decision-making options available to Facebook, including demotion and other algorithmic changes to the visibility of content. The Oversight Board should be able to issue these and other advisory opinions without having to be prompted by Facebook.

In December, Facebook also announced the creation of an independent trust tasked with supporting the regular operations of the Oversight Board, and shared an independent human rights review of the emerging body. RDR welcomes both announcements. We have long advocated for mechanisms that would ensure the Oversight Board’s independence from Facebook. Yet the risk of bias remains, as Facebook alone is responsible for selecting and appointing the trustees as well as the initial officers of the Oversight Board.

We also commend Facebook for commissioning a human rights review – and for sharing it publicly prior to the launch of the Oversight Board. Human rights impact assessments and similar structured due diligence mechanisms are in short supply across the industry. This publication has the potential to change that and lay a foundation for industry best practice. We also encourage Facebook to imbue the Oversight Board with the authority to provide advice on the company’s broader due diligence processes as they develop.

Facebook has accepted a great challenge in setting up the Oversight Board – it is putting forth a structure with the potential to set new norms for the governance of content moderation, not only on its own platform but across the internet. Given Facebook’s dominant role as an enabler of online speech for billions of people across the globe, it is critical that the company get this right the first time. We welcome Facebook’s increased commitment to transparency and accountability. At the same time, the company should take note that this commitment will only take the Oversight Board so far in the absence of an explicit anchoring in universal human rights, which should underpin its design, launch, and evolution.


At a time of regulatory and geopolitical uncertainty, investors should seek tech companies that are using human rights standards to guide their work and build trust with users. Look for companies with policies, practices, and governance that go above and beyond baseline legal compliance. 

Today we release our Winter 2020 Investor Update. Our latest special edition for investors uses RDR’s 2019 Corporate Accountability Index results to show how leading companies are handling artificial intelligence, targeted advertising, content moderation, and other burning industry issues around which regulatory consensus has yet to form. We argue that to get ahead of regulatory risks, CEOs and boards need to take responsibility for the human rights risks and negative social impacts associated with their business models.

Digital rights issues have become increasingly important to investors. The number of shareholder resolutions addressing issues covered by the RDR Index has risen over the years, from just 2 in 2015 to 12 in 2019.

See this interactive table for a list of resolutions cross-referenced to RDR Index indicators. 

A strong theme across many of the proposals that made it onto proxy ballots in 2019 is the need for more responsible and accountable governance—particularly in relation to online speech, artificial intelligence, and privacy. While these resolutions lacked the votes to pass (and some companies’ dual-class share structures made passage impossible), the sharpened focus and growing number of such resolutions point to a clear increase in investor concern about digital rights issues. Related resolutions are already being filed for 2020: the advocacy group SumOfUs cited RDR data in a resolution calling on Apple to promote freedom of expression, and the corporate responsibility organization As You Sow filed a resolution calling on Facebook to address disinformation and hate speech. We anticipate that shareholders will be at least as active on these and related issues in 2020 as they were last year.

Our recommendations to investors:

  • Look for companies that go beyond legal compliance to proactive stewardship. Rather than simply looking for how well companies are preparing to comply with anticipated regulation, investors should focus on companies that demonstrate good data stewardship, and proactively work to protect users’ human rights, whether or not the law compels them to do so.
  • Look for companies that conduct comprehensive oversight and impact assessments. By examining company performance on specific RDR Index indicators, investors can gain a more granular picture of specific types of risk. For example, the 2019 RDR Index highlighted the failure of Facebook, Google, and Twitter to conduct human rights impact assessments, leaving them ill-equipped to understand and mitigate the risks their business practices pose to users.
  • Reward companies that take responsibility for their human rights impact on issues that lack regulatory consensus, like online speech. The media is awash with headlines about online extremism, hate speech, and disinformation. Debates about appropriate regulatory responses – from increasing intermediary liability to antitrust – make it harder to predict the regulatory future for online speech than for privacy. Under such circumstances, look for efforts by companies to be accountable to users and affected communities despite the absence of clear regulation. A key first step will be for companies to be more transparent about how they formulate and enforce rules for paid as well as organic user content. Greater disclosure will contribute to a more informed policy discussion about what types of rules will be most effective.
  • Hold companies accountable for the part they play in shaping our shared future. While the spotlight on the world’s most powerful tech giants is already strong, scrutiny of how their operations affect the public interest will only intensify in a highly volatile U.S. election year. At a time like this, corporate responsibility and accountability around advertising business models and algorithmic decision-making systems becomes even more important.

For more analysis and resources, see RDR’s investor resource page. If you are an investment professional, please consider participating in our investor survey.