How GLAAD is keeping social media companies accountable through its Social Media Safety Index and its new Platform Scorecard.

Last month, hate and harassment of LGBTQ social media users made headlines when actor Elliot Page was deadnamed and misgendered on Twitter by conservative author and academic Jordan Peterson. Peterson’s account was suspended for violating Twitter’s hateful conduct policy, which prohibits targeted deadnaming—referring to a transgender person by the name they used prior to transition—and misgendering. (Twitter is one of only a few social media platforms, including TikTok and Pinterest, with such a policy.) The past year has shed tremendous light on the hate and harassment faced in online environments by members of the LGBTQ community, thanks to widespread media coverage and research. According to a recent report by the Anti-Defamation League (ADL), two-thirds of LGBTQ respondents say they have experienced harassment online.

The digital rights community has long called on social media platforms to do more to create a safer online environment for LGBTQ people and other vulnerable communities. With GLAAD’s new 2022 Social Media Safety Index (SMSI) report and its Platform Scorecard, the community now has a tool in hand to hold companies to account for how their publicly disclosed policy commitments impact LGBTQ expression, privacy, and safety online.

For this first-ever Scorecard, Goodwin Simon Strategic Research (GSSR), an independent public opinion research firm, partnered with GLAAD and RDR to create an accountability tool building on RDR’s rigorous methodology and best practices. GLAAD’s inaugural SMSI report in 2021 laid bare the existing state of LGBTQ safety, privacy, and expression on the platforms. It also set forth the eventual goal of evaluating the platforms using a standardized scorecard. Given the critical role that RDR plays in holding major tech companies accountable for respecting user rights, GLAAD looked no further than RDR’s standards and best practices when setting out to develop the new 2022 SMSI Scorecard.

Across 12 indicators, GLAAD assessed how Twitter, Instagram, Facebook, YouTube, and TikTok’s publicly disclosed policies impact their LGBTQ users. These indicators draw on best practices and guidelines, as well as feedback from RDR, while more directly addressing issues impacting LGBTQ users. For example, the first indicator looks at companies’ disclosed policy commitments to protect such users (for more details, you can read our one-pager describing our scorecard development and see the full list of indicators).

This first iteration of the Platform Scorecard shows that leading social media platforms are failing to adequately protect their LGBTQ users. None of the five companies that GLAAD evaluated had a combined score, across all indicators, of more than 50%. In the report, GLAAD highlights several areas where tech companies need to do better. Of note: There is a clear lack of transparency across the board. Some of the most glaring findings include:

  • Companies lack transparency about what options users have to exert control over whether and how information related to sexual orientation and gender identity is collected, inferred, and used by platforms to draw conclusions about LGBTQ people’s identities.
  • In particular, companies disclose little regarding what control users have over whether they are shown content based on this information. Users should not be shown content based on their gender identity or sexual orientation unless they explicitly opt in.
  • Companies also lack transparency about the steps they take to address demonetization and wrongful removal of legitimate LGBTQ-related content from ad services. This means, for example, that when LGBTQ creators’ content is suppressed or removed by platforms, the companies share little information with the creator explaining why.
  • While all of the companies we evaluated claim to engage with organizations representing LGBTQ people, none of the companies disclose the appointment of an LGBTQ policy lead to ensure that the companies’ policies reflect the true needs of LGBTQ users.
  • Twitter and TikTok are currently the only two platforms GLAAD evaluated that have policies prohibiting targeted deadnaming and misgendering. TikTok adopted this prohibition in response to the release of the 2021 SMSI. (It’s worth noting that the general lack of transparency from platforms means that the SMSI cannot assess the companies’ enforcement of stated policies, including this one.)

For each of the companies in the report, GLAAD lays out clear policy recommendations that companies should implement in order to create a safer online environment for their LGBTQ users. For instance, social media companies should emulate Twitter and TikTok by prohibiting targeted deadnaming and misgendering. Other recommendations include a ban on potentially harmful and/or discriminatory advertising content, the disclosure of training for content moderators, and a commitment to continuously diversifying the company’s workforce. As mentioned, companies should also hire an LGBTQ policy lead as part of their human rights teams to oversee the implementation of these policy commitments and ensure that they are truly reflective of users’ needs. Following such recommendations would not only allow companies to create a safer online environment for the LGBTQ community, but could also pave the way for progress for other vulnerable communities and under-represented voices. For example, a commitment to diversifying a company’s workforce would help make sure that people of color, people with disabilities, as well as other groups are represented within the company and included in the development and implementation of the company’s policies, products, and services.

Thanks to GLAAD’s SMSI and Platform Scorecard, the digital rights community is now able to track companies’ progress on commitments to policies meant to protect LGBTQ users. GLAAD is holding ongoing briefings with each platform to review issues that LGBTQ users face and advocate for the recommendations described in the report. GLAAD’s Social Media Safety program maintains an ongoing dialogue about LGBTQ safety amongst tech industry leaders. It also spotlights new and existing safety issues facing LGBTQ users in real-time, both to the platforms and to the press and public.

The threats that LGBTQ users, as well as those from other vulnerable communities, face online are many and are constantly evolving. Therefore, GLAAD hopes that the SMSI will continue to expand and grow to include indicators on other pressing LGBTQ-related policy issues, including disinformation on gender-affirming care. The Scorecard has an important role to play in helping to create an online environment that might finally allow LGBTQ users to express themselves both fully and safely online.

Read the full GLAAD Social Media Safety Index and Platform Scorecard.

RDR’s 2022 Big Tech Scorecard underlined the dire state of privacy among digital platforms. None of the 14 companies we ranked topped a score of 60 percent, and privacy was the lowest-scoring of our three categories (the others being governance and freedom of expression and information). We have been calling for federal privacy legislation for years, highlighting it as an essential first step to reducing companies’ rampant data collection, and mitigating the harmful surveillance capitalist business model it supports.

We’ve written op-eds, revised our methodology, produced stand-alone reports, given academic presentations, submitted comments to federal agencies and to the United Nations, and participated in congressional testimony. We have steadily made the case that government-enforced privacy protections are a cornerstone of holding Big Tech to account, not only for users’ rights but also for healthy information ecosystems. This is why we enthusiastically endorse the American Data Privacy and Protection Act (ADPPA).

On Wednesday, the House Energy and Commerce Committee voted 53-2 to advance the bill, also known as H.R. 8152, thus clearing the way for a floor vote. This is the first time that a comprehensive federal privacy bill has made it this far in the legislative process. This victory comes after years of negotiations between House and Senate leaders of both parties. It was also informed by intense lobbying from industry groups eager for a federal standard that would preempt robust state laws, notably California’s recently passed privacy acts. Against long odds, however, the resulting ADPPA is actually stronger than the California privacy laws. Among the most notable improvements to the status quo are:

  • No more notice-and-consent: The ADPPA finally ends the broken “notice-and-consent” paradigm, where to use a service, internet users are compelled to accept the company’s terms, whatever they are. Unfortunately, most policies are far too legalistic and onerous for users to actually read through and often fail to fully disclose all data practices. Instead, the ADPPA centers the concepts of data minimization and purpose limitation. This provides users with positive rights over their data through direct limitations on how it can be used, collected, and shared. In fact, companies would only be allowed to collect data for one of the 17 purposes laid out in the bill. Any other data collection would simply be prohibited.
  • Individual data rights: The ADPPA allows users themselves to access, correct, delete, and export their data directly to competing services.
  • Civil rights protections: Thanks to years of advocacy from civil and human rights organizations, the ADPPA includes civil rights protections from online discrimination based on demographic and behavioral data—one of surveillance advertising’s most direct harms. It would be the first piece of legislation to explicitly extend civil rights protections to the digital realm. This would include, among other things, mandating that companies correct for algorithmic discrimination.
  • Surveillance advertising: The ADPPA bans targeted advertising for under-17s; prohibits targeting based on specific categories of sensitive data, which include health information and geolocation; and creates a global opt-out mechanism from targeted advertising for everyone.
  • Impact assessments: The ADPPA requires “large data holders” (companies with annual revenues of at least $250 million that collect data on more than 5 million people) to conduct impact assessments on both their privacy and algorithmic practices. These assessments must include details about how the platform is mitigating potential harms. The algorithmic assessments would have to be submitted to the FTC.
  • Enforcement: The bill creates a new Bureau of Privacy at the Federal Trade Commission to enforce the law. It would empower state authorities (notably attorneys general and state privacy agencies like the California Privacy Protection Agency) to bring forward enforcement cases as well as provide a private right of action for many violations. Enforcement was a major sticking point in the drafting process, and has continued to be one since the bill’s introduction in June. We expect to see further amendments on enforcement as the ADPPA winds its way through the legislative process.

Would we be even happier with a stronger bill that ends surveillance advertising once and for all? Yes, of course. Legislators from both parties and both chambers, as well as privacy advocates, should continue working together to improve the bill’s protections. But we should not scuttle a bill that would provide significantly more privacy protections while doing a good bit to rein in surveillance-based advertising in service of an ideal that remains out of reach. We can’t ignore that this will likely be the last chance to pass federal privacy legislation under the Biden Administration.

Although this bill may not alone spell the end of the ad tech business model, it will make notable progress by limiting the endless data collection upon which the model sustains itself and by giving millions of people protections they currently lack. This is a major first step, and it’s far past time we take it.


As a researcher at Temple University in Philadelphia, I study how stories about social problems and their possible solutions make their way into the mainstream media, how they get covered, and how audiences respond to them. One key question I look at is: How do we get news audiences to understand social issues, care about them, and become informed about potential policy solutions to very complex problems?

Given that part of RDR’s mission is to influence corporations by having their behavior around privacy and freedom of expression covered in the media, I was thrilled when I was hired to conduct an evaluation of the RDR Index four years ago. This was an opportunity to dig into research questions around how the news can work to promote more comprehensive coverage of issues related to social justice.

As part of this evaluation, I interviewed 14 civil society organizations about their use of the RDR Index, what they saw as its strengths and weaknesses, and how both it and other human-rights-related rankings could help push forward social movements and bring social issues into the media and into public conversation. In other words, I wanted to know: How do we take numbers, numerical rankings, and indicators (things that might strike some as academic, esoteric, or just plain boring) and turn them into something that can impassion the public and spur social change?

Through these interviews, I came to a few conclusions that I’ll share below. If you want to read my full write-up of this, you can do so. But in short:

  1. Indices like RDR’s offer three critical resources for activists trying to get their narratives into the mainstream media: legitimate information, newsworthy information, and flexible information.
  2. Activists find it really hard to use data from these indices (but we can fix that!).

Legitimate information refers to the idea that journalists typically need their sources of information to be seen as objective and reliable. This poses a problem for social movement actors who are often perceived as biased since they have a clear stance on current societal problems and are advocating for a particular course of political action. Index data—if based on a rigorous, transparent methodology and created by organizations seen as credible, such as RDR—can give activists trying to get their stories into the media an increased perception of objectivity and legitimacy.

Newsworthy information refers to the potential overlap between stories that civil society organizations (CSOs) want to get into the public sphere and stories that news outlets are interested in publishing. News organizations do need and want evidence that social problems exist, especially if evidence can point to actors or organizations who are misbehaving. But the investigative reporting needed to uncover this information can be costly for news outlets. The public still values investigative journalism, but shrinking newsroom budgets mean the total number of issues investigated is declining. Activists bringing this evidence to journalists is therefore a win-win for both sides: Activists can combine their understanding of the issue with numerical data and analysis, and journalists can shine a spotlight on how particular actors are negatively impacting human rights.

Flexible information refers to the idea that numbers do not represent a black-and-white version of truth or an incontrovertible version of reality. Numbers, including rankings, are simply descriptive. They describe, or “indicate,” how a particular organization performs on a particular metric. They are in many ways meaningless until someone explains their significance and ties them together to tell a story.

The indicators can thus tell a variety of stories. For example, within the broad category of privacy, the methodology includes 23 indicators that, when taken together, produce a score (which is itself another indicator) regarding how well corporations adhere to human rights principles related to privacy. The scores for each of these 23 indicators (e.g., “Sharing of User Information”) are calculated by aggregating between one and 12 “sub-indicators,” which are called elements (e.g., whether the company discloses sharing information with governments). Altogether, the Big Tech Scorecard is made up of 58 indicators (each with multiple elements), across 14 companies and 43 specific services, resulting in tens of thousands of data points overall for each iteration of the Index.
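To make that nested structure concrete, here is a minimal, hypothetical sketch of how a score might roll up from elements to an indicator and then to a category. The element names, values, and plain averaging are illustrative stand-ins, not RDR’s actual scoring rules, which are spelled out in the published methodology.

```python
# Illustrative sketch only: element names, values, and the plain averaging
# below are hypothetical; RDR's published methodology defines the real rules.
from statistics import mean

# Hypothetical element-level findings for one indicator, scored as
# full disclosure (1.0), partial (0.5), or none (0.0).
elements = {
    "Discloses whether it shares user information with governments": 1.0,
    "Discloses the types of user information it shares": 0.5,
    "Discloses the names of third parties it shares with": 0.0,
}

def indicator_score(element_scores: dict) -> float:
    """Roll element scores up into a single indicator score (0-100)."""
    return 100 * mean(element_scores.values())

def category_score(indicator_scores: list) -> float:
    """Roll indicator scores up into a category score, e.g., privacy (0-100)."""
    return mean(indicator_scores)

sharing = indicator_score(elements)           # 50.0 for the example above
print(category_score([sharing, 80.0, 20.0]))  # 50.0
```

Even this toy version shows why the dataset grows so quickly: dozens of indicators, each with multiple elements, multiplied across 14 companies and 43 services, adds up to tens of thousands of data points.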

A variety of activist interests, including privacy rights, freedom of expression, children’s rights, and democracy promotion, can find indicators and sets of indicators within the dataset that help them tell a particular human rights story. For example, while conducting my evaluation, I spoke to one interviewee working on democracy promotion who suggested that the data could potentially be used to tell the story of how a particular company’s score has dropped because it’s gotten cozier with an authoritarian regime and changed its policies accordingly. How any one group uses the data would depend upon the particular political moment, the news environment, the agenda of the organization, and the indicator scores.

The problem? As I just mentioned, the Scorecard produces tens of thousands of data points in each iteration. The organizations I spoke with wanted to use this information, or at least suggested they did, but many were lost on how to do so. These organizations are often small, with an overworked staff, each wearing multiple hats. They can’t possibly also be expected to become sophisticated data analysts. Many therefore wanted RDR to present the data in a different way, or parse the data for them, to help them tell their own stories.

This creates the following paradox: On one hand, the fact that the RDR Index has tens of thousands of data points is great because it means there are endless ways to use the information depending on an organization’s goals. In addition, this data provides mountains of rigorous evidence that assists in advancing policy arguments. But at the same time, the perception is that organizations can’t employ the data themselves, which limits the scope of its current usage.

RDR will never be able to produce all the stories that CSOs want, and especially not at the moment they’re needed. Activists are the ones with their fingers on the pulse of what is happening in their areas of expertise and therefore are best placed to know what stories need to come out, when, how, and where. They are best placed to navigate the news and information space, supply journalists with needed information, or respond to news events.

So what is needed now is a way for activists to use the data themselves. The fact that indices are composed of numbers should not make them impenetrable. Contrary to what some data skeptics assume, using indicators does not always require advanced mathematical skills; it is often only a matter of understanding what indicators mean. In the case of the Big Tech Scorecard, for instance, “analysis” might simply mean looking at the scores for a particular indicator of interest (e.g., “Does the company notify users when terms of service change?”) and comparing scores across companies to see who performed the best or worst. Approached this way, analyzing the dataset shouldn’t be too hard.
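As a purely hypothetical illustration of how simple that kind of comparison can be, here is a short pandas sketch. The column names and scores are made up and do not reflect the actual Scorecard schema or results.

```python
# A made-up illustration of the simple "analysis" described above:
# pick one indicator and rank companies on it. Column names and scores
# are hypothetical, not the actual Scorecard schema or data.
import pandas as pd

scores = pd.DataFrame([
    {"company": "Company A", "indicator": "Notifies users of terms-of-service changes", "score": 72},
    {"company": "Company B", "indicator": "Notifies users of terms-of-service changes", "score": 54},
    {"company": "Company C", "indicator": "Notifies users of terms-of-service changes", "score": 31},
])

indicator = "Notifies users of terms-of-service changes"
ranked = (
    scores[scores["indicator"] == indicator]
    .sort_values("score", ascending=False)
    .reset_index(drop=True)
)
print(ranked[["company", "score"]])  # best-to-worst comparison on one indicator
```

No statistics degree required: filter to the indicator you care about, sort, and you have a story about who leads and who lags.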

RDR has begun addressing this limitation. The organization has been working to meet civil society and other stakeholders halfway, which has translated into an expansion of RDR’s policy advocacy, investor engagement, and guidance for organizations on how best to employ the methodology and standards to highlight the issues they care about, across countries and regions. It will also be launching a new Research Lab, which will include trainings to help CSOs learn to navigate the data to fit their own needs. It remains to be seen how this new effort will go, but it is crucial in order to maximize the value of the Index for activists and social movements.

I believe that activists who know their issues best should be writing and informing media stories; it is not only up to RDR. Numerical analysis cannot replace the passion or emotion that breathes life into a movement, but anecdotal stories of wrongdoing can be buttressed with reliable data to strong and positive effect. (A useful example of this is the set of companion essays that RDR now publishes alongside its Scorecards.) Such data can be used to strengthen the sway of policy arguments made both to policymakers and to news organizations. This is a resource that is therefore both sorely needed and deserving of further attention.


Meta released its first-ever “Annual” Human Rights Report last week, looking at the company’s purported progress toward meeting its human rights obligations from 2020 through 2021. This release follows years of criticism from civil society over Meta’s failure to act on accusations that it has ignored online abuses that have led to real-world harm across the globe. We at Ranking Digital Rights have consistently highlighted this pressing need, including in our latest Meta Scorecard. In fact, we urged the company then known as Facebook to act all the way back in 2015 when we published our first RDR Index, and we’ve continued to do so ever since.

This report, in other words, has been a long time coming. Although we’re glad it finally arrived, we would have liked to see a greater acknowledgement of the existing policies and incentive structures that are stifling the company’s ability to better respond to human rights issues. We’re hoping that the briefing Meta promised as a follow-up to the report will provide us with an opportunity to continue engaging on these issues. Below we dig into the report and look at the good, the bad, and what’s missing.

First, the good:

  • The report exists!: The fact that the heat on Meta was strong enough to compel the company to produce this report, which makes explicit references to international human rights norms and instruments, is a start. Better late than never!
  • Recognition that rights extend beyond users: Although it’s important that tech companies respect the human rights of their users, the impact of Facebook’s activities reaches far beyond them. Thankfully, this report recognizes that rights-holders include “not only users of our platforms and services, but the many others whose rights were potentially impacted by online activity and conduct.” 
  • Facebook’s Trusted Partners program: Meta discloses the use of “trusted partners,” which includes “over 400 non-governmental organizations, humanitarian agencies, human rights defenders and researchers from 113 countries around the globe.” The stated goal of this program is to help Facebook understand the impact of its policies on at-risk users. While there are good reasons to keep the full list of partner organizations confidential, the company should provide much more information on how this program actually works; it’s a positive development in theory, but there’s far too little transparency to really evaluate it.

Now, the bad. And there’s unfortunately a good deal of that, with Meta making a whole lot of meaningless and misleading claims in this report:

  • Meta starts off the report with its mission statement: “[T]o give people the power to build community and bring the world closer together.” Somehow, according to Meta, this statement is supposed to “strongly” align itself with “human rights principles.” How exactly is that? We’re not so sure. 
  • Next, Meta pays lip service to the concept of a “universal obligation to non-discrimination” as part of its “vision and strategy”: But it does so without recognizing that the targeted advertising business model inherently enables and automates discrimination based on demographic and behavioral data. Nor does the report grapple with the discrimination resulting from the uneven way Meta allocates resources toward content moderation in different languages. 
  • Meta, in its own words, is a “mission-driven company where employees are typically aligned with human rights norms. In turn, this consensus leads to a company-wide community that wants to protect and advance human rights.” But there’s no evidence for this claim—we’re supposed to just take the company at its word. And, once again, Meta is making this statement despite using a business model that, as we’ve been saying for years, is grounded in the violation of the right to privacy. 
  • Ad-policy enforcement barely makes it into the report: Although the company makes over 98 percent of its money from advertising, a discussion of the effects of Meta’s ad content and systems is almost completely absent from its “human rights impact assessments.” And this despite the fact that about 80 percent of Meta shareholders voted this year for a human rights impact assessment (HRIA) of the company’s ad-targeting practices. According to the report, Meta created new AI classifier systems, which they say will allow them “to enforce bans on violating ads and commerce listings for certain medical products.” This seems to be the only reference to ads throughout the report. (It should be noted that Meta does not release any data whatsoever on how it moderates ads, despite accounting for almost a quarter of all digital ad spending in the United States.) Are we really supposed to believe that surveillance advertising has no impact on the rights to privacy, free expression, and non-discrimination? Meta clearly wants us to think so, but we’re not buying it.
  • Is this really all the human rights due diligence Meta did in two years?: It’s not clear whether Meta has conducted human rights due diligence in countries beyond the ones mentioned (Cambodia, Indonesia, the Philippines, Sri Lanka, and India), or on product features other than end-to-end encryption and Ray-Ban Stories. If not, then why not? If they have, then why are these the only evaluations included in the report? In particular, as many other civil society organizations have pointed out, the full HRIA from India should be made public (allowing for redactions needed to protect civil society actors). We also expected to see a discussion of human rights due diligence around the so-called “metaverse,” but found none.
  • Meta’s Human Rights Policy Team, which was responsible for this report, counted four full-time staff at the end of 2021. A team of only four seems far too small to be able to properly investigate the human rights policy of a company of the size and scope of Meta, even if many other roles also touch on human rights. (Contrast this number with the armies of lobbyists Meta employs around the world.)

Finally, there are a few things altogether missing that really should be there. 

  • There’s no mention whatsoever of the uneven enforcement of content policies across regions, countries, and languages. There is some mention of AI-driven content moderation, but no acknowledgment that these systems are much more advanced for some languages (like English) than others, and don’t exist at all for many others. Meta also vaguely claims to have “improve[d] our moderation across languages by adding more expertise,” but doesn’t say anything about how this affects its ability to moderate effectively or how human rights are impacted. 
  • Content moderators: There is no mention of the labor rights of Meta’s moderators. The company has already been the subject of a lawsuit over their working conditions from an ex-moderator in Kenya.
  • There is no mention of any attempts at data minimization or purpose limitation—two bedrock principles of data protection that are fundamental to the human right to privacy. This is not surprising, given Meta’s voracious appetite for data collection and insistence that its very existence is “in line with human rights principles.”

Again, we’re glad that Meta felt compelled to put out this report and recognized the need to commit to a human rights policy, something we’ve been calling for. Most large tech companies do not produce a human rights report at all. But beyond this, the report fails to actually address the causes of the online abuses that pushed civil society to demand action from Meta in the first place. Many of the issues we’ve highlighted in our past Scorecards, including insufficient attention to content moderation policies, were wholly missing. Furthermore, there isn’t much indication in this report that the company will do what’s needed to address its lack of adherence to human rights principles. But how could there be? The first step to solving a problem is admitting that there is one.

Over the past weeks, Amazon, Meta, Twitter, and Alphabet (Google) all faced a shareholder reckoning. Nearly 50 petitions launched by investors across the four tech giants called on them to come clean on an array of issues. Many of those issues were related to human rights. The topics on the table: surveillance products and their use by government agencies, the corrosive impact of targeted ads, ensuring the safety of warehouse workers, and more. Shareholders voted on all of them.

On the surface, the outcome was disheartening: only two proposals won a majority of shareholder votes, both of them at Twitter. But the raw numbers obscure a much more complex picture.

This year’s wave of investor action on human rights has proven stronger than any in the past. The volume and range of proposals has reached record levels, breaking into issues that the investor community formerly hadn’t explored. A critical mass of shareholders has backed human rights motions, even when it was clear that the artificially outsized voting power of Google and Meta’s corporate leadership would nullify their chances of winning a majority, thanks to the companies’ multi-class stock structures.

There are good reasons to expect that investor-led pressure for corporate accountability will continue to flourish. Let’s dive into them.

Shareholder meetings 101

Shareholder proposals are one of the most powerful tools in the activist investor’s toolbox. They democratize the mechanisms that govern a company’s operations by putting issues raised by investors to a vote. Civil society organizations often support proponents in crafting and amplifying their demands.

The voting process works both as a referendum on the company’s leadership and as a barometer of shareholders’ sentiment about how the company is navigating key issues. Proposals are advisory, but strong support creates a powerful incentive for boards to take action or face further backlash. Losing shareholders’ trust is ultimately a prelude to losing their capital.

Echoing the ongoing boom in ESG (environmental, social, and governance) investing, shareholders this year have hit many tech companies with more proposals than they had ever received. The 17 proposals at Google, 15 at Amazon, 12 at Meta, and five at Twitter each set an all-time record at the respective company. Support for those that tackle social and environmental issues has grown rapidly, often hitting more than 30%—a threshold commonly viewed as critical to compel executives to take action.

Most proposals fail to earn an absolute majority, but dragging an uncomfortable issue into the spotlight and keeping it there is a big deal in itself. As recent history shows, sustained pressure pays off. Two years ago, Apple bent to relentless public appeals by both investors and civil society when it published its first human rights policy. Last year, Microsoft promised an independent human rights assessment of its surveillance and law enforcement contracts, in part as a compromise to activist investors. Similar examples abound.

Twitter


Twitter’s annual meeting took place amid continued uncertainty surrounding the sale of the company to Elon Musk, which CEO Parag Agrawal announced would not be discussed at the event. Shareholders scored victories with two proposals demanding more transparency on its use of concealment clauses (such as non-disparagement agreements) and on electoral spending. Both of them won a majority. Another proposal called for the board to appoint a member with human or civil rights expertise. Much like last year, it gained the support of about 15% of shareholders. Like at Meta, a call for a civil rights audit filed by an “anti-woke” conservative group was soundly defeated.

Read more about Twitter’s transparency on human rights issues in the Ranking Digital Rights Big Tech Scorecard.

Proposal | Votes for
Report on concealment clauses | 67.86%
Director with human/civil rights expertise | 14.76%
Civil rights audit | 2.21%
Electoral spending report | 52.66%
Lobbying activities and expenditures | 40.15%

A cascade of wake-up calls

Seasoned investors have made it clear this year that they are no strangers to the nuances of the human rights issues that affect their holdings. Algorithms, ad targeting, unbridled data collection, and deals involving government actors with a penchant for repression were all up for a vote at tech companies this year.

At Meta, excluding Mark Zuckerberg’s votes, about 80% of shareholders voted for a human rights impact assessment (HRIA) of the company’s ad targeting system. The proposal, which the organization I work for directly supported, underscored that Meta has never revealed any data on the ads it restricts and never offered more than a cursory remark on the topic in any of its previous HRIAs.

The proposal ultimately secured the second highest number of votes of the 12 that were on the table this year—one of the strongest shows of support for a shareholder proposal in the company’s history.

Why does this matter? Because it signals that, in the eyes of investors, the balance between maximizing profits and protecting users’ rights is shifting in favor of the latter. Increasingly, shareholders are not just asking for a reckoning with the impact of specific business decisions, but with the entire architecture of Big Tech. As the proposal’s authors put it, Meta is “nibbling around the edges of a problem instead of looking at the root cause–the overarching systems that govern targeted ads.” In other words, it’s the business model.

Meta

Read Overview

Meta faced 12 proposals, most of them focusing on the impact of Meta’s business model and platform governance. Shareholders called for the company to assess the effectiveness of its content policies in stemming harmful speech, evaluate the impact of expanding encryption on children’s rights, and conduct a human rights impact assessment (HRIA) alongside the development of the “metaverse” project. A proposal calling for an HRIA of Meta’s targeted ad business model won the backing of more than three-quarters of all “independent” (one-vote-per-share) stockholders—one of the strongest results recorded at Meta to date. None of the proposals reached 50% support, largely due to Mark Zuckerberg’s augmented voting power, which allows him to veto all of them every year. Nearly all independent shareholders voted to abandon this structure, but were overruled by Zuckerberg.

Read more about Meta’s transparency on human rights issues in the Ranking Digital Rights Big Tech Scorecard.

Proposal | Votes for | >50% of independent votes?
Eliminate dual-class shares | 28.08% | YES
Independent chair | 16.69% | YES
Concealment clauses | 18.92% | YES
External costs of misinformation | 2.72% | NO
Report on Community Standards enforcement | 19.19% | YES
Human rights review of the metaverse | 2.91% | NO
Human rights assessment of targeted ads | 23.68% | YES
Report on risk of child sexual exploitation | 17.22% | YES
Civil rights and non-discrimination audit | 0.31% | NO
Report on lobbying | 20.55% | YES
Assessment of Audit and Risk Oversight Committee | 10.44% | NO
Report on charitable contributions | 9.25% | NO

Companies’ expansion into new digital and physical spaces also came under fire. Shareholders challenged Meta’s vision of the future, calling for a human rights review of its plans for the “metaverse.” A week later, Google’s investors slammed its plan to open cloud regions in human rights hotspots like Saudi Arabia. The company has shown no evidence of the due diligence it conducted in light of the country’s appalling human rights record, which includes brutalizing activists and operating extensive digital surveillance networks.

Neither proposal came close to reaching the 50% support threshold—an unachievable feat, thanks to the companies’ multi-class stock structures. But the message was clear: wherever major business decisions have shown their capacity to cause harm, there will be an investor rallying allies to push for accountability.

No escaping civil rights accountability

Three of the shareholder meetings took place on the anniversary of George Floyd’s murder. All of them took place in the wake of the racist massacre at a Buffalo supermarket, which the perpetrator livestreamed on the Amazon-owned Twitch. The mass murder was yet another horrendous touchpoint in a history of systemic violence that technology has often aggravated.

Corporate boards are facing a surge of investor-led rebukes on their lackluster civil rights efforts. Demand for change has skyrocketed since 2020, when Facebook released a damning third-party assessment dissecting how the company’s failure to rein in noxious posts and ads resulted in “significant setbacks for civil rights.”

Amazon


Amazon was hit with 15 proposals, including multiple on labor rights. In a historic first for the company, a warehouse worker (“picker”) filed and presented a proposal for Amazon to investigate warehouse working conditions, winning 38% support. Another worker-related proposal, which won more than a third of the vote, demanded a report on Amazon’s efforts to protect freedom of association amid a rise in unionization efforts. Two proposals asked for a report on the human rights impacts of the use of Amazon products and technologies by government agencies worldwide, highlighting the repressive uses of Rekognition (Amazon’s facial recognition system) and of Amazon’s cloud services in particular. Both of them won the backing of 35% of shareholders. None of the proposals received majority support, but several strong results will be difficult for the e-commerce giant to ignore.

Read more about Amazon’s transparency on human rights issues in the Ranking Digital Rights Big Tech Scorecard

Proposal | Votes for
Climate-linked retirement plan options | 8.72%
Report on customer due diligence | 39.99%
Hourly employees as board candidates | 22.15%
Report on packaging materials | 48.62%
Report on worker health and safety disparities | 12.71%
Risks of concealment clauses | 24.65%
Report on charitable contributions | 2.69%
Alternative tax reporting | 17.35%
Report on freedom of association | 38.57%
Report on lobbying | 47.03%
More director candidates than board seats | 0.81%
Report on warehouse working conditions | 43.74%
Additional reporting on gender/racial pay | 28.65%
Human rights impact of facial recognition | 40.42%
End productivity expectations and workplace monitoring | 0.25%


Calls for comprehensive civil rights audits have swept through tech companies, mirroring broader trends. Earlier this year, a group of shareholders celebrated a victorious proposal at Apple demanding a third-party assessment of the company’s impact on civil rights. Amazon announced an audit of its own in April, conceding to a campaign by a group of New York pension funds that had gained strong momentum.

Shareholders also rallied around a call for a racial equity audit at Google, clinching the fourth strongest result of all the proposals the company faced this year. This petition too grew out of investors’ apprehensions regarding Google’s business model, which has made it the most dominant advertising force on the internet. Journalists had previously revealed that Google’s targeting platform gave white supremacist content a pass while blocking terms related to social and racial justice.

Every share gets one vote (except when it doesn’t)

Unprecedented shareholder pressure has moved companies to try to insulate themselves from it. Case in point: last year, a record 56 tech companies—nearly half of all tech IPOs—went public with structures that granted founders and insiders inflated voting power over ordinary shareholders.

Alphabet (Google)


Google’s parent company faced 17 shareholder proposals. Investors called on the company to carry out a racial equity audit, assess the human rights impacts of opening new cloud regions in states with poor human rights records, and publish new disclosures on Google’s use of algorithms as well as how it collects and processes user data. For the tenth consecutive year, shareholders voted on a proposal to abolish Alphabet’s multi-class share structure (see why these are a problem). It won the strongest support of any proposal in the company’s history. None of the motions achieved the 50% threshold, but nearly half of them would have were it not for Alphabet insiders’ inflated voting power.

Read more about Google’s transparency on human rights issues in the Ranking Digital Rights Big Tech Scorecard.

Proposal | Votes for | >50% of independent votes?
Lobbying report | 18.94% | YES
Climate lobbying report | 18.80% | YES
Report on physical risks of climate change | 17.74% | YES
Report on water management risks | 22.54% | YES
Racial equity audit | 22.31% | YES
Report on concealment clauses | 11.95% | NO
Equal shareholder voting | 33.16% | YES
Report on government takedown requests | 0.40% | NO
HRIA of data centers in human rights hotspots | 16.99% | NO
Report on data collection, privacy, and security | 12.21% | NO
Algorithm disclosures | 19.54% | YES
Misinformation and disinformation | 23% | YES
Report on external costs of disinformation | 3.51% | NO
Report on board diversity | 5.25% | NO
Establish environmental sustainability board committee | 4.74% | NO
Non-management employee representative director | 2.55% | NO
Report on military and militarized policing agencies policy | 9.16% | NO


Now endemic in the tech sector, stock structures with two or more classes (known as dual- or multi-class stock) exist under the premise that “visionary” founders and their allies should have free rein to maximize growth and innovation. In most cases, this gives a small superclass of corporate elites 10 or more times the voting power of regular investors, who generally get one vote per share. This means they can minimize their personal investment in their own company while maximizing their clout. At Snap, shareholders receive no voting rights at all—a deeply undemocratic formula for perpetual corporate power.

When they debuted on the stock market, Meta and Alphabet both baked these structures into their business models. Officially, of the 155 proposals the two companies have jointly received since they went public, shareholders have never approved a single one. In reality, seven out of the 12 proposals shareholders voted on at Meta this year would have won a majority had Mark Zuckerberg not single-handedly blocked them with his super-voting shares. (Amazon and Twitter give each share one vote.)
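To see how that arithmetic works, here is a stylized sketch with made-up share counts and support levels (actual ownership and turnout figures at Meta and Alphabet differ): when insiders hold a modest slice of the equity but get ten votes per share, even near-unanimous support from everyone else falls well short of a headline majority.

```python
# Stylized, made-up numbers to illustrate multi-class voting arithmetic;
# actual ownership and turnout figures at Meta or Alphabet differ.
def headline_support(insider_shares: float, outsider_shares: float,
                     insider_votes_per_share: int = 10,
                     outsider_support: float = 0.92) -> float:
    """Headline 'votes for' percentage when super-voting insiders vote no
    and a given fraction of one-vote-per-share outsiders vote yes."""
    insider_votes = insider_shares * insider_votes_per_share
    outsider_votes = outsider_shares  # one vote per share
    yes_votes = outsider_votes * outsider_support
    return 100 * yes_votes / (insider_votes + outsider_votes)

# Insiders holding ~13% of shares but carrying 10x votes: 92% support among
# all other shareholders still registers as only ~37% of the total vote.
print(round(headline_support(insider_shares=13, outsider_shares=87), 1))  # 36.9
```

That is the dynamic behind near-unanimous independent votes, like the 92% vote described next, landing far below the 50% line in the official tally.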

On May 25, a record 92% of shareholders who were not Zuckerberg voted to terminate Meta’s warped voting structure. At Google, which has a separate class with no voting power, the same proposal won more support this year than any other in the company’s history. Yet in both cases, the very existence of multi-class structures guaranteed the proposals would never secure a majority.

Investors, activists, and academics oppose multi-class share structures almost unanimously. The normative arguments are clear. Outsized voting rights transform an ostensibly democratic process into one that is rigged by design. They entrench unaccountable management while disenfranchising ordinary shareholders. They offload the risks of irresponsible decisions onto shareholders. Because retirement funds are almost certain to include a who’s who of tech companies, the public ultimately pays the price.

But distorted power structures are not inevitable. In the US, the SEC and Congress both have avenues to curb the use of dual-class shares or ban them entirely. And to keep corporate power from spinning out of control, that’s exactly what they should do. In pursuit of this goal, a coalition of human rights organizations led by Ranking Digital Rights has recently sent a letter to the SEC demanding that it put an end to multi-class shares and other structural barriers to shareholder action on human rights.

The spark is lit

A casual observer might look at the success rate of this year’s shareholder proposals and see a string of campaigns that the corporate boards of American tech giants have successfully deflected.

But investors’ willingness to take action on human rights is on the upswing. So is their appetite for partnering with civil society. The groundswell of support for collective human rights statements by investors with trillions of dollars in assets reflects this well.

Shareholder advocacy is no longer an elite domain. Investors and human rights advocates can and must mutually reinforce the specialized power they each possess to drive positive change. If we want Big Tech companies to use their enormous power to support human rights and democracy, or even avoid undermining them, we have to cultivate more open exchanges between these two groups. It’s one of the most promising paths forward.