
Meta released its first-ever “Annual” Human Rights Report last week, detailing the company’s purported progress toward meeting its human rights obligations from 2020 through 2021. This release follows years of criticism from civil society over Meta’s failure to act on accusations that it has ignored online abuses fueling real-world harm across the globe. We at Ranking Digital Rights have consistently highlighted this pressing need, including in our latest Meta Scorecard. In fact, we urged the company then known as Facebook to act all the way back in 2015, when we published our first RDR Index, and we’ve continued to do so ever since.

This report, in other words, has been a long time coming. Although we’re glad it finally arrived, we would have liked to see greater acknowledgment of the existing policies and incentive structures that stifle the company’s ability to respond to human rights issues. We’re hoping that the briefing Meta promised as a follow-up to the report will give us an opportunity to keep engaging on these issues. Below, we dig into the report and look at the good, the bad, and what’s missing.

First, the good:

  • The report exists! The fact that the heat on Meta was strong enough to compel the company to produce this report, which makes explicit references to international human rights norms and instruments, is a start. Better late than never!
  • Recognition that rights extend beyond users: Although it’s important that tech companies respect the human rights of their users, the impact of Facebook’s activities reaches far beyond them. Thankfully, this report recognizes that rights-holders include “not only users of our platforms and services, but the many others whose rights were potentially impacted by online activity and conduct.” 
  • Facebook’s Trusted Partners program: Meta discloses its use of “trusted partners,” including “over 400 non-governmental organizations, humanitarian agencies, human rights defenders and researchers from 113 countries around the globe.” The stated goal of this program is to help Facebook understand the impact of its policies on at-risk users. While there are good reasons to keep the full list of partner organizations confidential, this remains a positive development in theory only: the company discloses far too little about how the program actually works for us to evaluate it.

Now, the bad. And there’s unfortunately a good deal of that, with Meta making a whole lot of meaningless and misleading claims in this report:

  • Meta starts off the report with its mission statement: “[T]o give people the power to build community and bring the world closer together.” Somehow, according to Meta, this statement aligns “strongly” with “human rights principles.” How exactly? We’re not so sure.
  • Next, Meta pays lip service to a “universal obligation to non-discrimination” as part of its “vision and strategy”: But it does so without recognizing that the targeted advertising business model inherently enables and automates discrimination based on demographic and behavioral data. Nor does the report grapple with the discrimination that results from the uneven way Meta allocates content moderation resources across languages.
  • Meta, in its own words, is a “mission-driven company where employees are typically aligned with human rights norms. In turn, this consensus leads to a company-wide community that wants to protect and advance human rights.” But there’s no evidence for this claim—we’re supposed to just take the company at its word. And, once again, Meta makes this statement despite running a business model that, as we’ve been saying for years, is grounded in the violation of the right to privacy.
  • Ad-policy enforcement barely makes it into the report: Although the company makes over 98 percent of its money from advertising, discussion of the effects of Meta’s ad content and ad systems is almost completely absent from its “human rights impact assessments.” And this despite the fact that about 80 percent of Meta shareholders voted this year for a human rights impact assessment (HRIA) of the company’s ad-targeting practices. According to the report, Meta created new AI classifier systems, which it says will allow it “to enforce bans on violating ads and commerce listings for certain medical products.” This appears to be the only reference to ads in the entire report. (It should be noted that Meta does not release any data whatsoever on how it moderates ads, despite the fact that the company accounts for almost a quarter of all digital ad spending in the United States.) Are we really supposed to believe that surveillance advertising has no impact on the rights to privacy, free expression, and non-discrimination? Meta clearly wants us to think so, but we’re not buying it.
  • Is this really all the human rights due diligence Meta did in two years?: It’s not clear whether Meta has conducted human rights due diligence in countries beyond the ones mentioned (Cambodia, Indonesia, the Philippines, Sri Lanka, and India), or on product features other than end-to-end encryption and Ray-Ban Stories. If not, why not? If it has, why are these the only evaluations included in the report? In particular, as many other civil society organizations have pointed out, the full HRIA from India should be made public (allowing for redactions needed to protect civil society actors). We also expected to see a discussion of human rights due diligence around the so-called “metaverse,” but found none.
  • Meta’s Human Rights Policy Team, which was responsible for this report, counted four full-time staff at the end of 2021. A team of only four seems far too small to be able to properly investigate the human rights policy of a company of the size and scope of Meta, even if many other roles also touch on human rights. (Contrast this number with the armies of lobbyists Meta employs around the world.)

Finally, there are a few things altogether missing that really should be there. 

  • There’s no mention whatsoever of the uneven enforcement of content policies across regions, countries, and languages. There is some mention of AI-driven content moderation, but no acknowledgment that these systems are much more advanced for some languages (like English) and don’t exist at all for many others. Meta also makes a vague claim to have “improve[d] our moderation across languages by adding more expertise,” but says nothing about how this affects its ability to moderate effectively or how human rights are impacted.
  • Content moderators: There is no mention of the labor rights of Meta’s moderators. The company has already been the subject of a lawsuit over working conditions brought by an ex-moderator in Kenya.
  • There is no mention of any attempts at data minimization or purpose limitation—two bedrock principles of data protection that are fundamental to the human right to privacy. This is not surprising, given Meta’s voracious appetite for data collection and insistence that its very existence is “in line with human rights principles.”

Again, we’re glad that Meta felt compelled to put out this report and recognized the need to commit to a human rights policy, something we’ve been calling for. Most large tech companies do not produce a human rights report at all. But beyond this, the report fails to actually address the causes of the online abuses that pushed civil society to demand action from Meta in the first place. Many of the issues we’ve highlighted in our past Scorecards, including insufficient attention to content moderation policies, were wholly missing. Furthermore, there isn’t much indication in this report that the company will do what’s needed to address its lack of adherence to human rights principles. But how could there be? The first step to solving a problem is admitting that there is one.

Over the past weeks, Amazon, Meta, Twitter, and Alphabet (Google) all faced a shareholder reckoning. Nearly 50 petitions launched by investors across the four tech giants called on them to come clean on an array of issues, many of them related to human rights. The topics on the table: surveillance products and their use by government agencies, the corrosive impact of targeted ads, the safety of warehouse workers, and more. Shareholders voted on all of them.

On the surface, the outcome was disheartening: only two proposals won a majority of shareholder votes, both of them at Twitter. But the raw numbers obscure a much more complex picture.

This year’s wave of investor action on human rights has proven stronger than any in the past. The volume and range of proposals have reached record levels, breaking into issues that the investor community had not previously explored. A critical mass of shareholders has backed human rights motions, even when it was clear that the artificially outsized voting power of Google’s and Meta’s corporate leadership, granted by the companies’ multi-class stock structures, would nullify any chance of winning a majority.

There are good reasons to expect that investor-led pressure for corporate accountability will continue to flourish. Let’s dive into them.

Shareholder meetings 101

Shareholder proposals are one of the most powerful tools in the activist investor’s toolbox. They democratize the mechanisms that govern a company’s operations by putting issues raised by investors to a vote. Civil society organizations often support proponents in crafting and amplifying their demands.

The voting process works both as a referendum on the company’s leadership and as a barometer of shareholders’ sentiment about how the company is navigating key issues. Proposals are advisory, but strong support creates a powerful incentive for boards to take action or face further backlash. Losing shareholders’ trust is ultimately a prelude to losing their capital.

Echoing the ongoing boom in ESG (environmental, social, and governance) investing, shareholders this year hit many tech companies with more proposals than they had ever received. The 17 proposals at Google, 15 at Amazon, 12 at Meta, and five at Twitter each set an all-time company record. Support for those that tackle social and environmental issues has grown rapidly, often exceeding 30%—a threshold commonly viewed as critical to compel executives to take action.

Most proposals fail to earn an absolute majority, but dragging an uncomfortable issue into the spotlight and keeping it there is a big deal in itself. As recent history shows, sustained pressure pays off. Two years ago, Apple bent to relentless public appeals by both investors and civil society when it published its first human rights policy. Last year, Microsoft promised an independent human rights assessment of its surveillance and law enforcement contracts, in part as a compromise to activist investors. Similar examples abound.

Twitter


Twitter’s annual meeting took place amid continued uncertainty surrounding the sale of the company to Elon Musk, which CEO Parag Agrawal announced would not be discussed at the event. Shareholders scored victories with two proposals demanding more transparency on the company’s use of concealment clauses (such as non-disparagement agreements) and on its electoral spending. Both won a majority. Another proposal called for the board to appoint a member with human or civil rights expertise; much like last year, it gained the support of about 15% of shareholders. As at Meta, a call for a civil rights audit filed by an “anti-woke” conservative group was soundly defeated.

Read more about Twitter’s transparency on human rights issues in the Ranking Digital Rights Big Tech Scorecard.


[Table: Twitter shareholder proposal votes]

A cascade of wake-up calls

Seasoned investors have made it clear this year that they are no strangers to the nuances of the human rights issues that affect their holdings. Algorithms, ad targeting, unbridled data collection, and deals involving government actors with a penchant for repression were all up for a vote at tech companies this year.

At Meta, excluding Mark Zuckerberg’s votes, about 80% of shareholders voted for a human rights impact assessment (HRIA) of the company’s ad targeting system. The proposal, which the organization I work for directly supported, underscored that Meta has never revealed any data on the ads it restricts and never offered more than a cursory remark on the topic in any of its previous HRIAs.

The proposal ultimately secured the second highest number of votes of the 12 that were on the table this year—one of the strongest shows of support for a shareholder proposal in the company’s history.

Why does this matter? Because it signals that, in the eyes of investors, the balance between maximizing profits and protecting users’ rights is shifting in favor of the latter. Increasingly, shareholders are not just asking for a reckoning with the impact of specific business decisions, but with the entire architecture of Big Tech. As the proposal’s authors put it, Meta is “nibbling around the edges of a problem instead of looking at the root cause–the overarching systems that govern targeted ads.” In other words, it’s the business model.

Meta


Meta faced 12 proposals, most of them focused on the impact of Meta’s business model and platform governance. Shareholders called for the company to assess the effectiveness of its content policies in stemming harmful speech, evaluate the impact of expanding encryption on children’s rights, and conduct a human rights impact assessment (HRIA) alongside the development of the “metaverse” project. A proposal calling for an HRIA of Meta’s targeted ad business model won the backing of more than three-quarters of all “independent” (one-vote-per-share) stockholders—one of the strongest results recorded at Meta to date. None of the proposals reached 50% support, largely due to Mark Zuckerberg’s augmented voting power, which allows him to veto all of them every year. Nearly all independent shareholders voted to abandon this structure, but were overruled by Zuckerberg.

Read more about Meta’s transparency on human rights issues in the Ranking Digital Rights Big Tech Scorecard.


[Table: Meta shareholder proposal votes]

Companies’ expansion into new digital and physical spaces also came under fire. Shareholders challenged Meta’s vision of the future, calling for a human rights review of its plans for the “metaverse.” A week later, Google’s investors slammed its plan to open cloud regions in human rights hotspots like Saudi Arabia. The company has shown no evidence of conducting due diligence in light of the country’s appalling human rights record, which includes brutalizing activists and operating extensive digital surveillance networks.

Neither proposal came close to reaching the 50% support threshold—an unachievable feat, thanks to the companies’ multi-class stock structures. But the message was clear: wherever major business decisions have shown their capacity to cause harm, there will be an investor rallying allies to push for accountability.

No escaping civil rights accountability

Three of the shareholder meetings took place on the anniversary of George Floyd’s murder. All of them took place in the wake of the racist massacre at a Buffalo supermarket, which the perpetrator livestreamed on the Amazon-owned Twitch. The mass murder was yet another horrendous episode in a history of systemic violence that technology has often aggravated.

Corporate boards are facing a surge of investor-led rebukes on their lackluster civil rights efforts. Demand for change has skyrocketed since 2020, when Facebook released a damning third-party assessment dissecting how the company’s failure to rein in noxious posts and ads resulted in “significant setbacks for civil rights.”

Amazon


Amazon was hit with 15 proposals, including several on labor rights. In a historic first for the company, a warehouse worker (“picker”) filed and presented a proposal for Amazon to investigate warehouse working conditions, winning 38% support. Another worker-related proposal, which won more than a third of the vote, demanded a report on Amazon’s efforts to protect freedom of association amid a rise in unionization efforts. Two proposals asked for a report on the human rights impact of the use of Amazon products and technologies by government agencies worldwide, highlighting the repressive uses of Rekognition (Amazon’s facial recognition system) and Amazon’s cloud services in particular. Both won the backing of 35% of shareholders. None of the proposals received majority support, but several strong results will be difficult for the e-commerce giant to ignore.

Read more about Amazon’s transparency on human rights issues in the Ranking Digital Rights Big Tech Scorecard.


[Table: Amazon shareholder proposal votes]

Calls for comprehensive civil rights audits have swept through tech companies, mirroring broader trends. Earlier this year, a group of shareholders celebrated a victorious proposal at Apple demanding a third-party assessment of the company’s impact on civil rights. Amazon announced an audit of its own in April, conceding to a campaign by a group of New York pension funds that had gained strong momentum.

Shareholders also rallied around a call for a racial equity audit at Google, clinching the fourth strongest result of all the proposals the company faced this year. This petition, too, grew out of investors’ apprehensions about Google’s business model, which has made it the most dominant advertising force on the internet. Journalists had previously revealed that Google’s targeting platform gave white supremacist content a pass while blocking terms related to social and racial justice.

Every share gets one vote (except when it doesn’t)

Unprecedented shareholder pressure has moved companies to try to insulate themselves from it. Case in point: last year, a record 56 tech companies—nearly half of all tech IPOs—went public with structures that granted founders and insiders inflated voting power over ordinary shareholders.

Alphabet (Google)


Google’s parent company faced 17 shareholder proposals. Investors called on the company to carry out a racial equity audit, assess the human rights impacts of opening new cloud regions in states with poor human rights records, and publish new disclosures on Google’s use of algorithms as well as how it collects and processes user data. For the tenth consecutive year, shareholders voted on a proposal to abolish Alphabet’s multi-class share structure (see why these are a problem). It won the strongest support of any proposal in the company’s history. None of the motions achieved the 50% threshold, but nearly half of them would have done so were it not for Alphabet insiders’ inflated voting power.

Read more about Google’s transparency on human rights issues in the Ranking Digital Rights Big Tech Scorecard.


[Table: Alphabet shareholder proposal votes]

Now endemic in the tech sector, stock structures with two or more classes (known as dual- or multi-class stock) exist under the premise that “visionary” founders and their allies should have free rein to maximize growth and innovation. In most cases, this gives a small superclass of corporate elites 10 or more times the voting power of regular investors, who generally get one vote per share. This means they can minimize their personal investment in their own company while maximizing their clout. At Snap, shareholders receive no voting rights at all—a deeply undemocratic formula for perpetual corporate power.

When they debuted on the stock market, Meta and Alphabet both baked these structures into their business models. Officially, of the 155 proposals the two companies have jointly received since they went public, shareholders have never approved a single one. In reality, seven out of the 12 proposals shareholders voted on at Meta this year would have won a majority had Mark Zuckerberg not personally blocked them with a single vote. (Amazon and Twitter give each share one vote.)

On May 25, a record 92% of shareholders who were not Zuckerberg voted to terminate Meta’s warped voting structure. At Google, which has a separate class with no voting power, the same proposal won more support this year than any other in the company’s history. Yet in both cases, the very existence of multi-class structures guaranteed the proposals would never secure a majority.
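
To make the arithmetic concrete, here is a minimal sketch of how supervoting shares swamp an outside majority. The share counts below are hypothetical and purely illustrative, not Meta’s actual capitalization; only the 10-votes-per-share multiplier and the 92% outside-support figure come from the discussion above.

```python
# Illustrative sketch of dual-class voting arithmetic.
# Hypothetical share counts (NOT Meta's actual capitalization).
outside_shares = 2_300_000_000   # class A: one vote per share
insider_shares = 400_000_000     # class B: ten votes per share, insider-held

outside_votes = outside_shares * 1
insider_votes = insider_shares * 10
total_votes = outside_votes + insider_votes

equity_pct = insider_shares / (outside_shares + insider_shares)
voting_pct = insider_votes / total_votes
print(f"Insiders own {equity_pct:.0%} of shares but control {voting_pct:.0%} of votes")
# -> Insiders own 15% of shares but control 63% of votes

# Even if 92% of outside votes back a proposal and insiders vote against it:
support_pct = (0.92 * outside_votes) / total_votes
print(f"The proposal wins just {support_pct:.0%} of total votes and fails")
# -> The proposal wins just 34% of total votes and fails
```

Under these assumed numbers, a near-unanimous outside vote still falls far short of a majority, which is exactly the dynamic the Meta and Alphabet results illustrate.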

Investors, activists, and academics oppose multi-class share structures almost unanimously. The normative arguments are clear. Outsized voting rights transform an ostensibly democratic process into one that is rigged by design. They entrench unaccountable management while disenfranchising ordinary shareholders. They offload the risks of irresponsible decisions onto shareholders. And because retirement funds are almost certain to include a who’s who of tech companies, the public ultimately pays the price.

But distorted power structures are not inevitable. In the US, the SEC and Congress both have avenues to curb the use of dual-class shares or ban them entirely. And to keep corporate power from spinning out of control, that’s exactly what they should do. In pursuit of this goal, a coalition of human rights organizations led by Ranking Digital Rights has recently sent a letter to the SEC demanding that it put an end to multi-class shares and other structural barriers to shareholder action on human rights.

The spark is lit

A casual observer might look at the success rate of this year’s shareholder proposals and see a string of campaigns that the corporate boards of American tech giants have successfully deflected.

But investors’ willingness to take action on human rights is on the upswing. So is their appetite for partnering with civil society. The groundswell of support for collective human rights statements by investors with trillions of dollars in assets reflects this well.

Shareholder advocacy is no longer an elite domain. Investors and human rights advocates can and must mutually reinforce the specialized power each possesses to drive positive change. If we want Big Tech companies to use their enormous power to support human rights and democracy, or even simply to avoid undermining them, we have to cultivate more open exchanges between these two groups. It’s one of the most promising paths forward.

 

Today Ranking Digital Rights sent a letter to the U.S. Securities and Exchange Commission (SEC) urging the agency to ban multi-class share structures and to repeal SEC rules that limit the ability of investors to file and resubmit shareholder resolutions. These two practices undermine investors’ ability to address corporate wrongdoing and shift an unacceptable amount of risk onto the public.

Our letter calls on the SEC to:

End multi-class share structures: Unequal voting structures disenfranchise shareholders, hitting those who call for action on human rights especially hard. The SEC should eliminate these structures entirely, prioritizing companies under “bad actor disqualification” provisions, then newly listed companies, and finally existing companies. It should also require companies to disclose how their stock structures impact corporate governance.

Repeal SEC rules that hinder shareholder action: The 2020 rule changes disproportionately target small stockholders and bury important proposals. The SEC must rescind its rules that restrict participation according to stock ownership (which marginalize small shareholders), that raise the thresholds of support needed for shareholders to resubmit proposals, and that limit shareholders’ ability to build coalitions.

In recent years, the SEC has overseen strong growth in the number of companies that deliberately dilute (or, in some cases, eliminate) shareholders’ voting rights when going public. Rather than adhering to the standard of one vote per share, companies are opting to institute dual- or multi-class share structures, in which a special type of share—one that only company insiders can own—is worth 10, 20, or even 50 votes. The purpose of these structures is to ensure that control of the company remains with insiders, even in the event of a shareholder vote.

Multi-class share structures can entrench irresponsible management, kill leadership’s incentives to talk to (and answer) the public, and crush investor votes for change. Five of the companies ranked in RDR’s 2022 Big Tech Scorecard are structured this way: Google’s parent company Alphabet, Meta, Yandex, Baidu, and VK.

The agency has also maintained a set of rules adopted in late 2020 that limit—and sometimes impede altogether—shareholders’ ability to participate in actions that determine the future of the companies in which they invest. Facing little regulatory pushback, company executives have thus granted themselves power and immunity from standard corporate accountability mechanisms at previously unseen rates.

Alongside Ranking Digital Rights, signatories to the letter include more than 20 human and civil rights organizations:

  • Access Now
  • Accountable Tech
  • American Federation of Teachers
  • Campaign for Accountability
  • Center for Digital Democracy
  • Coalition For Women In Journalism
  • Fair Vote
  • Fight for the Future
  • Foundation The London Story
  • Defend Democracy
  • Media Alliance
  • Mnemonic
  • OpenMedia
  • Open Technology Institute
  • Real Facebook Oversight Board
  • The Signals Network
  • Open MIC (Open Media and Information Companies Initiative)
  • Public Citizen
  • SumOfUs
  • Taraaz
  • United Church of Christ Media Justice Ministry

Our organizations’ missions center on protecting human and civil rights in the digital age, hence our emphasis on the tech sector, but we are concerned about barriers to shareholder advocacy and good corporate governance in all sectors of the economy.

Read the letter for more details on these demands.


Panel Discussion: Charting the Future of Big Tech Accountability

Big Tech accountability has come a long way since Ranking Digital Rights’ inaugural report in 2015. More than ever, the companies we rank make explicit commitments to human rights, disclose how they handle government demands, and clearly describe their security measures. But the 2022 Big Tech Scorecard shows that there’s still a long way to go—and a lot we don’t know.

Companies aren’t telling us enough about how they develop and deploy algorithms. We don’t know enough about how they enforce their rules on targeted advertising. Most share almost nothing about their protocols for disclosing data breaches. And these are just a few of the many indicators we monitor.

Meanwhile, recent whistleblower disclosures have helped fill the gaps and affirmed what we and other civil society groups have long argued: despite the best efforts of many working-level employees, Big Tech executives refuse to do what is necessary to protect people and societies from the harmful impact of their products and services.

With new legislation looming in Europe and the U.S., a boom in ESG shareholder resolutions targeting human rights harms, and a public that’s tired of being tracked, the next chapter of Big Tech accountability is unfolding fast.

 

Speakers:

Jessica Dheere, @jessdheere

Director, Ranking Digital Rights

 

Sarah Couturier-Tanoh, @share_ca

Corporate Engagement & Advocacy Manager, SHARE

 

Jesse Lehrich, @JesseLehrich

Co-Founder, Accountable Tech

 

Chris Lewis, @ChrisJ_Lewis

President & CEO, Public Knowledge

Katarzyna Szymielewicz, @szymielewicz

President, Panoptykon Foundation

 

Sophie Zhang, @szhang_ds

Facebook whistleblower

 

Moderator:

Nathalie Maréchal, @MarechalPhD

Policy Director, Ranking Digital Rights


More About the Panelists

Sarah Couturier-Tanoh, Corporate Engagement & Advocacy Manager, SHARE

Sarah Couturier-Tanoh is an expert in corporate research and shareholder engagement. She leads dialogues with Canadian and international companies to advance ESG issues, including human rights, decent work, and corporate lobbying. Sarah has also published several issue briefs on current shareholder and policy topics, drawing on her background in non-financial auditing.

Before joining SHARE in 2019, Sarah researched transparency in the extractive industry and climate change-related disclosure at Université Laval.

Sarah holds a master’s in Environmental Law from Université Laval, a master’s in Sustainable Development and Corporate Social Responsibility from University Paris-Dauphine, and a master’s in Comparative Public Law from University Pantheon-Assas, France.

Twitter: @share_ca

Jesse Lehrich, Co-Founder and Senior Advisor, Accountable Tech

Jesse Lehrich is a co-founder of Accountable Tech. He has a decade of experience in political communications and issue advocacy, including serving as the foreign policy spokesman for Hillary Clinton’s 2016 presidential campaign, where he was part of the team managing the response to Russia’s information warfare operation.

Twitter: @JesseLehrich

Christopher Lewis, President & CEO, Public Knowledge

Christopher Lewis is President and CEO at Public Knowledge. Prior to becoming President and CEO, Chris served as Vice President of the organization from 2012 to 2019, leading its day-to-day advocacy and political strategy on Capitol Hill and at government agencies. During that time he also served as a local elected official, serving two terms on the Alexandria City Public School Board. Chris serves on the Board of Directors for the Institute for Local Self Reliance and represents Public Knowledge on the Board of the Broadband Internet Technical Advisory Group (BITAG).

Chris also brings experience working in the Federal Communications Commission Office of Legislative Affairs, including as its Deputy Director. He has over 18 years of political organizing and advocacy experience, including serving as Virginia State Director at GenerationEngage, and working as the North Carolina Field Director for Barack Obama’s 2008 Presidential Campaign. Chris graduated from Harvard University with a Bachelor’s degree in Government.

Twitter: @ChrisJ_Lewis

Portrait of Polish lawyer and activist Katarzyna Szymielewicz, by Lech Zych, CC BY-SA 4.0

 

Katarzyna Szymielewicz, President, Panoptykon Foundation

Katarzyna Szymielewicz is an expert in human rights and technology, lawyer, and activist. She’s a co-founder and president of the Panoptykon Foundation, a Polish NGO defending human rights in surveillance society. From 2012 to 2019, Katarzyna was vice-president of European Digital Rights (EDRi), and she has been an Ashoka Fellow since 2015.

Katarzyna has contributed to the public debate in Europe on emerging issues such as algorithmic accountability, explainability of AI-supported decisions, micro-targeting based on inferred data, and the societal costs related to commercial surveillance.

Twitter: @szymielewicz

Photo of Sophie Zhang and her cat Shadow by Lisa Danz

Sophie Zhang, Facebook Whistleblower

Sophie Zhang became a whistleblower after spending 2 years and 8 months at Facebook. During that time, she failed in efforts to fix the company from within. She personally caught two national governments using Facebook to manipulate their own citizenry, while also revealing concerning decisions made by Facebook regarding inauthenticity in Indian and U.S. politics.

Formerly a data scientist, Sophie currently stays home to pet her cats.

Twitter: @szhang_ds

Ranking Digital Rights

Jessica Dheere, Director, Ranking Digital Rights

Jessica Dheere is director of Ranking Digital Rights, an independent research program at the think tank New America that evaluates the world’s most powerful tech and telecom companies on their public commitments to protect users’ free expression, privacy, and other rights. She co-authored RDR’s spring 2020 report “Getting to the Source of Infodemics: It’s the Business Model.” A 2018-19 fellow at the Berkman Klein Center for Internet & Society at Harvard University, she is also founder, former executive director, and board member of the Beirut-based Arab digital rights organization SMEX, where she launched the CYRILLA Collaborative, a catalog of global digital rights law and case law. She was an inaugural member of the Freedom Online Coalition’s Advisory Network and has presented at the Internet Governance Forum, the Milton Wolf Seminar on Media and Diplomacy, RightsCon, and the International Journalism Festival, among other international internet policy events. She is a graduate of Princeton University and the New School.

Twitter: @jessdheere

 

Nathalie Maréchal, Policy Director, Ranking Digital Rights

Nathalie Maréchal is an internationally recognized expert on digital rights, corporate governance, and corporate accountability. In 2020, Nathalie was the lead author of RDR’s “It’s the Business Model” report series, which builds on her 2018 Motherboard op-ed, “Targeted Advertising is Ruining the Internet and Breaking the World.” The series argues that disinformation, hate speech, and other “information harms” linked to social media platforms are rooted in the surveillance capitalism business model. Nathalie has testified in front of the US House of Representatives and the US International Trade Commission. She holds a PhD in communication from the Annenberg School at the University of Southern California, and lives in Washington, DC.
Twitter: @MarechalPhD

 

A roll of receipts on a blue background

Yesterday we released the 2022 Big Tech Scorecard, our annual evaluation of how transparent the world’s most powerful digital platforms are about their policies and practices affecting human rights, particularly the rights to freedom of expression and privacy. In a time of upheaval and change, RDR is closely watching how committed the world’s Big Tech companies are to human rights, good governance, and transparency. In other words, we’re keeping receipts.

For the sixth year in a row, none of the 14 digital platforms we evaluated earned a passing grade. Yes, the scores for most companies—and the average of all of them—did tick up slightly this year, but we had hoped for more.

Unfortunately, when it comes to aligning their policies and practices with human rights standards and their obligations under the UN Guiding Principles on Business and Human Rights, our data shows that companies are content to conduct business as usual when the state of the world demands anything but.

If there’s one recommendation we have for every company we rank, it’s to accelerate their efforts to develop and implement rights-respecting policies and practices across their operations. We suggest that they use our human rights–based standards as an easy-to-follow roadmap.

Browse the 2022 Big Tech Scorecard. Here’s what you can expect:

  • Key findings that provide insight into the data and note year-over-year progress and decline, emerging patterns and longtime trends, problem spots, and opportunities for change.
  • Individual company scorecards, our most popular feature, which highlight each company’s scores in the context of recent developments and dive deep into company performance in our governance, freedom of expression, and privacy categories.
    Alibaba, Amazon, Apple, Baidu, Google, Kakao, Meta, Microsoft, Samsung, Tencent, Twitter, VK, Yahoo, and Yandex

  • New ways to explore our data by comparing company and service scores; looking at scores on groups of indicators from across our categories, what we’re calling Lenses (in beta); and tracking performance over time.
  • TL;DR? Check out the executive summary. It hits all the high notes and includes our policy recommendations.

Companion Essays

In addition to our data and analysis, each year we publish companion essays, authored by members of our team, that interpret our findings through the lens of pressing public issues. Here’s what we’re thinking about now:

In recent years, nearly all of the digital platforms that RDR evaluates have agreed to speak with us about our standards and our assessments of their policies and practices. But three Chinese companies we evaluate have left us hanging. Jie Zhang explores the reasons that Alibaba, Baidu, and Tencent respond with silence when we come knocking.

Why Won’t Chinese Companies Talk to Us? It’s Complicated.

 

For a global internet that supports and sustains human rights, we need a global online advertising ecosystem that does the same thing, Policy Director Nathalie Maréchal argues. She not only offers a prescription for fixing online advertising but also makes a case for how this could help solve some of the problems with unpaid online content.

We Can’t Govern the Internet without Governing Online Advertising. Here’s How to Do It.

 

Jan Rydzak, our company and investor engagement manager, explains how a combination of unfair and lax regulations at the Securities and Exchange Commission has tipped the balance of power against ordinary shareholders in recent years. This has allowed companies like Meta and Alphabet/Google to suppress shareholder participation. At Meta, for instance, shareholders have proposed scrapping the dual-class structure every year since 2014. Without Mark Zuckerberg’s votes, this resolution would have netted 90% support in 2021.

It’s Time to Bring Down the Barriers Blocking Shareholders on Human Rights

 


Where to find us

Ranking Digital Rights | Charting the Future of Big Tech Accountability 
May 4 at 10:30 AM EDT

After RDR Director Jessica Dheere kicks things off with highlights from the Big Tech Scorecard, Policy Director Nathalie Maréchal will moderate a panel of platform policy and advocacy superstars. Together they will explore the road ahead for corporate accountability, including major regulatory developments, the growing role of shareholder advocacy, and new strategies for holding Big Tech accountable on human rights.

 

Register Here

 

Investor Alliance for Human Rights | Big Tech Scorecard: Data-Driven Investor Engagement with Tech Companies
May 5 at 11:00 AM EDT

Join IAHR, RDR, and expert speakers to discuss the findings from the 2022 Big Tech Scorecard and their relevance for ESG investing.

 

Register Here

 


RDR media hits

Tech Policy Press: Justin Hendrix covered the 2022 Big Tech Scorecard’s release, stating: “Ironically, Ranking Digital Rights finds Twitter in the top position largely due to its showing in the “freedom of expression” category ‘which focuses on the kinds of actions companies take to moderate and curate content, suspend and remove accounts, and respond to government and other third-party demands’…Twitter was purchased Monday by billionaire Elon Musk, who has faulted the company for its policies on speech and content moderation.”

 

Read More at Tech Policy Press

 

France 24 Español: RDR’s Leandro Ucciferri, Global Partnerships Manager, appeared on France 24 to talk about Elon Musk’s vision of free speech in relation to his takeover bid for Twitter. “There’s a genuine concern that Twitter would backtrack years of progress and turn into a space prone to further hate speech, spam, and harassment.”

 

View on YouTube

 

Tech Policy Press: Policy Director Nathalie Maréchal was interviewed on Tech Policy Press’s Sunday Show Podcast, along with Matthew Crain, author of Profit over Privacy: How Surveillance Advertising Conquered the Internet. Their discussion focused on the history of the internet economy and surveillance advertising as well as policy options in the US to address privacy and big tech regulation.

 

Listen at Tech Policy Press

 

Los Angeles Times: Company and Investor Engagement Manager Jan Rydzak discussed Elon Musk’s interest in buying Twitter for the purported purpose of defending free speech on the platform, saying: “There’s an enormous irony that in doing so he would render himself unaccountable to shareholders and the broader public. That entire vector of influence that responsible investors have over a company would completely vanish.”

 

Read More in the LA Times

 


Support Ranking Digital Rights!

If you’re reading this, you probably know all too well how tech companies wield unprecedented power in the digital age. RDR helps hold them accountable for their obligations to protect and respect their users’ rights.

As a nonprofit initiative that receives no corporate funding, we need your support. Do your part to help keep tech power in check and make a donation. Thank you!

Support Us

Subscribe to get your own copy.