Digital platforms

Google, LLC

Rank: 4th
Score: 48%

Headquartered in the United States, Google is a subsidiary of Alphabet Inc. The company offers some of the world's most popular internet-related services and products. Google commands a 90% share of the worldwide search engine market. Its video-sharing service, YouTube, is visited by more than two billion logged-in users each month, and its email service, Gmail, has more than 1.5 billion users.

Google placed fourth among digital platforms in the 2020 RDR Index. In 2020, Google announced several measures to prevent the spread of misinformation both before and after the U.S. presidential election, including banning political ads. Yet the company’s video-sharing service, YouTube, was widely used by people spreading false claims about election results. Google also faced intense antitrust scrutiny and was sued by the U.S. Justice Department for anticompetitive and exclusionary practices in the search and search advertising markets. Despite facing increased public criticism on several fronts, Google made only marginal progress overall.

Key takeaways

  • Google gave no evidence of conducting human rights impact assessments on its own policy enforcement, on its development and use of algorithmic systems, or on its targeted advertising policies and practices.
  • Google did not explicitly commit to uphold human rights in its development and deployment of algorithmic systems and lacked transparency about how it develops and deploys these systems.
  • Google was less transparent about its security policies than many of its peers and failed to disclose anything about its policies for handling data breaches.

Key recommendations

  • Publish a commitment to uphold human rights in developing and using algorithms. Google should adopt human rights-centered principles and frameworks to guide the development and use of algorithmic systems.
  • Improve human rights due diligence. Google should more systematically address the impacts of its own policy enforcement, targeted advertising practices, and algorithmic use and development through robust human rights impact assessments.
  • Increase transparency of data inference practices and collection of user information from third parties. Google should provide sufficient transparency and user control over data inference, so that users can predict, understand, or refute data inferences. The company should respect user-generated signals to opt out of data collection and disclose how it handles user information it collects from third parties through contractual means.

The 2020 RDR Index covers policies that were active between February 8, 2019, and September 15, 2020. Policies that came into effect after September 15, 2020, were not evaluated for this Index.

Scores reflect the average across the services we evaluated, with each service weighted equally.

  • Lead researchers: Veszna Wessenauer, Jan Rydzak

Changes since 2019

  • Google improved its explanation of how it enforces content rules for Android and Gmail.
  • YouTube’s Account Termination page no longer stated that users are notified by email when their accounts are restricted for content rule violations.
  • Google introduced a new policy that, in some cases, will require users to provide a government ID to open a Google Play Developer account. This will weaken users’ ability to stay anonymous, an option that is essential for activists, journalists, and other civil society actors in authoritarian or otherwise illiberal countries.
Google lost 0.8 points on comparable indicators since the 2019 RDR Index.

Governance: 54%
Freedom of expression: 46%
Privacy: 48%

We rank companies on their governance, and on their policies and practices affecting freedom of expression and privacy.

Governance 54%

Google once again lagged behind its peers in the governance category. It disclosed less about its governance and oversight of human rights issues than its peers in the Global Network Initiative.

  • Commitment to human rights: Google made explicit commitments to privacy and freedom of expression and information but did not publish a clear commitment to human rights in its development and use of algorithmic systems. Google published a set of ethical principles that it applies in developing and using AI, including that systems be "socially beneficial" and not lead to bias or discrimination. But these principles made no explicit commitment to use human rights standards as the primary framework guiding how the company develops and deploys algorithms (G1).
  • Human rights due diligence: Google lacked evidence of conducting robust human rights due diligence on key aspects of its operations, including on possible human rights harms associated with its use of algorithmic systems and advertising-based business models. While it disclosed that it conducts risk assessments on some aspects of the regulatory environments in which it operates, it disclosed no evidence of assessing freedom of expression and information, privacy, and discrimination risks associated with the enforcement of its own policies, its targeted advertising policies and practices, and its use and development of algorithmic systems (G4).
  • Stakeholder engagement: Google is a member of the Global Network Initiative (GNI), a multistakeholder organization. However, GNI focuses primarily on government demands and does not address a wider set of human rights issues that internet users face (G5).
  • Remedy: Google failed to disclose clear, predictable, and accessible grievance and remedy procedures (G6a). YouTube users can appeal Community Guidelines actions, but the company offers no clear explanation of how the appeals process actually works (G6b).

Freedom of expression 46%

Google earned the second-highest freedom of expression and information score among digital platforms, after Twitter, but it failed to provide clear evidence that it enforces its rules, including its ad content and bot policies.

  • Content moderation: Google was transparent about its rules for what is and is not allowed on its platform (F3a), but it was not fully transparent about the processes it uses to identify content or accounts that violate those rules or about the role algorithmic systems play in that process. Google’s transparency about the actions it took to enforce its terms of service was inconsistent (F4a). YouTube's Community Guidelines Enforcement Report disclosed the number of videos removed for terms of service violations. But while YouTube indicated that it restricts content and accounts in various ways, such as removal or age restriction, the report did not include numbers for all types of restrictions. The company also did not publish rules governing the use of bots on YouTube (F13).
  • Algorithmic use and content curation: Google explained how algorithmic ranking systems are used in Search but gave no indication of whether users can opt in to these systems (F12). YouTube’s Help page made broad references to the use of algorithms for recommending content to users (F1d), but none of Google’s other services offered information about how their algorithms work.
  • Advertising content and targeting: Google’s ad content and targeting policies were easy to find and understand for most of its services (F1b, F1c). The policies described what types of ad content and targeting parameters are prohibited but were not clear about how breaches are detected or reported (F3b, F3c). Google shared some data about the volume of ads it removes, but it did not disclose how many of those ads were removed for violating its content rules or its targeting rules (F4c).
  • Censorship demands: Google remained one of the strongest platforms on reporting government censorship demands (F5-F7). It disclosed more about its processes for handling these demands, and data detailing its compliance with them, than any other company, apart from Twitter.

Privacy 48%

Google placed fifth on privacy among the digital platforms we evaluated. It stood out for strong transparency about government demands for user information but was less transparent about its data handling and security policies.

  • Handling of user data: Google was clear about what user information it collects and how (P3a) but revealed less about what data it infers (P3b). Google failed to disclose a policy describing how its algorithmic systems are developed (P1b), and it gave users no options to control how their information is used to develop them (P7).
  • Government and private demands for user data: Google remained one of the most transparent digital platforms about how it handles government requests for user information (P10a, P11a). Like other U.S. companies, it did not divulge the exact number of requests for user data it received under the Foreign Intelligence Surveillance Act or via National Security Letters, or the actions it took in response, since it is prohibited by law from doing so. Google explained why it sometimes shares user information in response to private requests, but it did not explain its processes for responding to such requests, nor did it publish any data about its compliance with them (P10b, P11b).
  • Security: Google was transparent about ways users can keep their accounts secure (P17), but it revealed no information about what actions it would take to address potential data breaches (P15). The company disclosed that it encrypts user traffic by default, but it did not disclose whether users can enable end-to-end encryption for their private content or communications in Gmail, YouTube, or Google Drive (P16).