2020 Recommendations

If the internet is to be designed, operated, and governed in a way that protects and respects human rights, everyone must take responsibility: companies, governments, investors, civil society organizations, and individuals. Below are our recommendations for companies and governments.

Recommendations for companies

All companies evaluated in the RDR Index can make many improvements immediately, even in the absence of legal and policy reform. No company covered by the RDR Index is headquartered in a jurisdiction that enables the full range of actions companies should take to respect and protect users’ human rights. But companies should strive to be better stewards of the online spaces and services they control, and push for reform whenever the path to fully respecting human rights is blocked by regulatory barriers.

For companies that are committed to respecting freedom of expression and privacy as human rights, the RDR Index indicators offer clear standards to follow. To fulfill their responsibilities under the UN Guiding Principles on Business and Human Rights, companies should take measures in four key areas outlined below.

Commit to and implement robust human rights governance

Companies demonstrate a commitment to protect and respect freedom of expression and privacy by publishing a strong human rights policy. They must back up that commitment by establishing board oversight and comprehensive due diligence mechanisms that identify how freedom of expression and privacy may be affected by the company’s full spectrum of operations, and that ensure that the company works to maximize the protection of users’ human rights. Companies should:

  • Conduct human rights due diligence: Companies should conduct comprehensive due diligence on all aspects of their business that may affect users’ human rights. They should broaden the scope of their human rights impact assessments to encompass government regulations and demands; the enforcement of their own policies (including the accuracy and impact of decisions to restrict content); the development and deployment of algorithmic systems and targeted advertising; and participation in zero-rating programs. In each area, companies must clearly specify that they evaluate impacts on freedom of expression and information, privacy, and non-discrimination. Companies should significantly strengthen existing due diligence, including by feeding its findings back into senior leadership’s decision-making and by contracting with independent third parties who can assure its quality. The results of these assessments should be published, to the extent that this does not jeopardize human rights.
  • Strengthen human rights oversight: Companies’ boards of directors should exercise direct oversight over risks related to user security, privacy, and freedom of expression and information. To that end, board membership should include people with expertise and experience on human rights issues related to digital rights. Boards should also ensure that due diligence, remedy processes, and stakeholder engagement are effective enough to address and mitigate human rights impacts and risks.
  • Strengthen commitments to the governance of privacy: Companies should implement effective governance and oversight of risks to users’ privacy posed by governments as well as by all other types of actors who may gain access to their information, whether lawfully or unlawfully. They should commit to protecting and advancing strong encryption standards and to informing their users in the event of a data breach. Finally, they should provide accessible, predictable, and transparent grievance and remedy mechanisms that ensure effective redress for violations of privacy.
  • Engage with affected stakeholders: Companies should engage with individuals and communities who are at greatest risk of experiencing human rights violations and who have historically been targets of persecution in their societies. They should actively incorporate the voices of those most at risk of harm from hate speech, disinformation, and other malicious speech. Working with these individuals and groups, they should create new processes for identifying risks, mitigating harm, expressing grievances, and providing meaningful remedy, as well as develop terms of service and enforcement mechanisms that maximize the protection and respect of all users’ rights. Finally, companies should join or encourage the creation of independent multi-stakeholder organizations with strong accountability mechanisms and a scope covering the full spectrum of areas in which companies’ activities can cause or contribute to human rights harms.
  • Offer effective grievance and remedy mechanisms: Users need to be able to report harms and seek remediation when their freedom of expression, privacy, or other rights are violated in connection with using a company’s platform, service, or device. Reporting a harm requires knowing it occurred. Companies must provide meaningful notice, as well as a credible path for appeals and resolution. In keeping with the Santa Clara Principles for content moderation and terms of service enforcement, companies should give notice to every user whose content is taken down or account is suspended, explain the rationale or authority for the action, and provide meaningful opportunities for timely appeal.

Maximize transparency

Companies should disclose comprehensive and systematic data and other information that enables users—as well as researchers, policymakers, investors, civil society, and other third parties—to have a clear understanding of how platforms and services restrict or shape speech and how they assess, mitigate, and provide redress for risks to users. In particular, they should:

  • Publish transparency reports on the enforcement of their rules: Such reports should be released regularly and include comprehensive data on the volume and nature of content that is restricted, blocked, or removed; what kinds of restrictions the company puts in place; and why. Companies should avoid aggregating any of this data and should pursue the same high level of detail for every country in which they operate (for a sense of what a single disaggregated record might contain, see the sketch after this list).
  • Regularly report on demands from governments and other third parties: Companies should publish comprehensive information about their processes for reviewing censorship demands, demands for user data, and network shutdowns (in the case of telecommunications companies). For each of these categories, they should release detailed data on the demands they receive, including the number, nature, and legal basis of demands made, as well as the agency or entity making them. They should disclose any legal reasons preventing them from being fully transparent in these areas. They should also commit to notifying users when their data has been requested or provide legal justification when they are unable to do so.
  • Demonstrate a credible commitment to security: Companies should implement privacy policies that offer the highest possible protections in all of the markets in which they operate, respecting the human rights of all users equally, regardless of geographic location. They should disclose as much as possible about whether and to what extent they follow industry standards of encryption and security, conduct security audits, commit to proactively informing the public about data breaches, monitor employee access to information, and educate users about threats.
  • Commit to resisting shutdowns and preserving network neutrality: Telecommunications companies should publish commitments to push back against network shutdown orders; notify users in the event of government demands for shutdowns; and report disaggregated data on how many demands they received, from which legal authorities, and how many they complied with. Companies should also clearly commit to network neutrality and demonstrate that they uphold this commitment by avoiding practices such as zero rating.
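
To illustrate the level of disaggregation recommended above, below is a minimal, hypothetical sketch of what one machine-readable record in such a transparency report could look like. The schema, field names, and sample values are our own illustration and are not drawn from any company’s actual reporting format.

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class RestrictionRecord:
        """One disaggregated entry in a hypothetical transparency report."""
        period: str             # reporting period, e.g. "2020-Q4"
        country: str            # ISO 3166-1 alpha-2 code; one record per country
        content_category: str   # company rule invoked, e.g. "hate_speech"
        action: str             # "removed", "restricted", or "account_suspended"
        volume: int             # number of items affected
        basis: str              # "terms_of_service" or "government_demand"
        requesting_entity: str  # agency or entity behind the demand, if any

    # A sample record; all values are illustrative, not real data.
    record = RestrictionRecord(
        period="2020-Q4",
        country="BR",
        content_category="hate_speech",
        action="removed",
        volume=1542,
        basis="government_demand",
        requesting_entity="national telecommunications regulator",
    )

    print(json.dumps(asdict(record), indent=2))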

Give users meaningful control over their data and data inferred about them

Transparency is the first step to increasing user agency, but there are other concrete measures companies can take to give users meaningful control, including providing clear options for users to decide not just whether and how their data is used, but whether it is collected in the first place and for what purpose. Companies should:

  • Commit to data minimization and clearly disclose what data is collected: Companies should clearly disclose to users the full life cycle of their information—from collection, to use, to sharing, to retention and deletion—and specify the purposes for collecting and sharing that information. Companies should collect only the data that is necessary to fulfill a clearly specified purpose. Companies should clearly disclose any data collected from or shared with third parties and how that data is obtained. Companies should also provide users with access to this information, including any information used to make inferences or predictions about them, in a structured format.
  • Be fully transparent about third-party data collection: Companies should clearly explain what data they collect about their users from third-party sources. Companies that track users around the web with cookies, web beacons, or other means should clearly disclose these practices to users. They should respect user-generated signals to “opt out” of being tracked (see the header-checking sketch after this list). Finally, they should be fully transparent about what user information they collect from third-party data brokers as well as how and why they collect it.
  • Let users opt in; do not force them to opt out: Companies should supply users with the information they need to give meaningful consent for how their data is managed. Whenever companies aim to use personal data to develop their algorithmic systems, targeted advertising practices, or other components of their operations, they should let users choose to opt in rather than placing the burden on them to opt out, and be clear about how they can do so.
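
As a concrete illustration of respecting user-generated opt-out signals: browsers can express tracking preferences through request headers, notably DNT (Do Not Track) and Sec-GPC (Global Privacy Control), both sent with the value “1” when the user has opted out. The minimal sketch below, written with the Flask framework, shows one way a service might check these signals before loading any tracking code. It is an illustration under those assumptions, not a complete consent implementation, which would also need to honor stored consent records and applicable law.

    from flask import Flask, request

    app = Flask(__name__)

    def user_opted_out() -> bool:
        """Return True if the browser sent a recognized opt-out signal.

        DNT is the older Do Not Track header; Sec-GPC is the Global
        Privacy Control header. Both carry the value "1" when the
        user has opted out of tracking.
        """
        return (
            request.headers.get("DNT") == "1"
            or request.headers.get("Sec-GPC") == "1"
        )

    @app.route("/")
    def index():
        if user_opted_out():
            # Serve the page without any third-party trackers or ad pixels.
            return "Tracking is disabled per your browser's opt-out signal."
        # Hypothetical default path: tracking code loads only after explicit opt-in.
        return "Welcome."

    if __name__ == "__main__":
        app.run()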

Account for harms that stem from algorithms and targeted advertising

Companies should maximize their transparency on the development and deployment of algorithmic systems and targeted advertising, publish and continually update their policies to specify where they are used and what rules govern them, and release data relevant to the protection of human rights in both areas.

  • Demonstrate algorithmic accountability: Companies should commit to following international human rights standards in developing and using algorithms. They should also publish comprehensive policies describing how algorithms are used and developed across their services. These policies should be reinforced by clear disclosure of the roles that algorithms play in critical areas of the company’s operations, such as content moderation. If automation is used in any way to enforce the company’s policies, those enforcement actions should be covered in the company’s transparency reporting. If algorithms are used in areas with widely recognized, salient human rights risks, such as in ranking or recommendation systems, companies should be clear about the variables that influence them and provide users with accessible ways to control their use.
  • Come clean about targeted advertising: Companies should publish advertising content and targeting policies that lay out not only what advertising is permitted and prohibited, but also how the company detects rule violations and enforces the rules. The use of targeted advertising systems should be underpinned by strong human rights due diligence processes that evaluate them for bias and potential discriminatory impacts, in addition to other human rights harms mentioned above. Companies that serve users ads on their platforms should publish universal, publicly accessible libraries of all paid advertisements on their properties, for all countries, including relevant targeting parameters, to the extent that doing so will not violate human rights. They should also report on advertising that they removed for violating their policies. Those that already report scattered ad enforcement data should expand their reporting and ensure it covers all of the categories in which they restrict ads.

Recommendations for governments

Governments have an essential role to play in helping companies meet their human rights obligations under the UN Guiding Principles on Business and Human Rights. It starts with upholding their own duty to protect human rights and enabling citizens to hold them accountable for how they exercise power over online speech and personal data. Then, they must foster policy environments that encourage companies to improve protections and respect for individuals’ privacy, freedom of expression and information, and freedom from discrimination.

In 2021, a new administration in the U.S., a new legislative package in the European Union, a forthcoming privacy law in China, and an array of pending laws and regulations in other jurisdictions present a tremendous opportunity to develop regulation that holds companies accountable for their growing power and keeps human rights at its center. To realize this opportunity, we recommend that governments take measures in five key areas outlined below.

Enshrine human rights standards in relevant law

Governments should ensure that domestic laws and their implementation are consistent with international human rights standards. All laws affecting online speech or the use and sharing of personal data must adhere to human rights standards. At the same time, governments should avoid enacting laws that compel companies to violate or facilitate the violation of users’ rights to freedom of expression or privacy. Any restriction of the right to freedom of expression or the right to privacy must be prescribed by law, necessary to achieve a legitimate aim (consistent with human rights standards), and proportionate to the aim pursued. Government agencies that enforce and implement laws must be subject to robust and effective oversight. Governments should:

  • Assess human rights impact of legislation, rules, and directives: Just as they require companies to conduct assessments, governments should themselves be required by law to conduct human rights impact assessments on proposed regulation of online speech and data protection. In particular, laws must limit platform liability for third-party content and be consistent with international human rights instruments and other international frameworks, as outlined by the Manila Principles on Intermediary Liability. They must also provide comprehensive data protection, including mechanisms for enforcing such protections and measures to address the negative effects on freedom of expression and privacy of algorithmic systems and targeted advertising.[1]

Data protection laws should:

  • Mandate data minimization and require disclosure about what data is collected: Companies should be required to clearly disclose to users the full life cycle of their information—from collection, to use, to sharing, to retention and deletion—and specify the purposes for collecting and sharing that information. Companies should collect only the data that is necessary to fulfill a clearly specified purpose. Companies should also clearly disclose any data collected from third parties and how that data is obtained, as well as provide users with access to this information, in a structured format.
  • Require disclosure about data inference: Governments must require companies to clearly disclose what information they infer about users and how they infer it, including the use of big data analytics to generate assumptions about user preferences or attributes (such as race, gender, sexual orientation) and opinions (including political opinions), as well as to make predictions about user behavior. Companies must also provide users with access to any information used to make inferences or predictions about them, in a structured format (see the export sketch after this list).
  • Give users meaningful control over their data: Governments should ensure that companies give users meaningful control over the collection and sharing of their information, including information inferred about them, and that they clearly disclose how users can exercise such control. Companies should be required to obtain opt-in consent from people before using their information for content-shaping, targeted advertising, tracking, or profiling.
  • Mandate disclosures of data breach policies: Governments should ensure that companies implement and disclose appropriate policies and procedures for handling data breaches, and that they notify users when their data has been compromised.
  • Forbid discrimination: Companies should be prohibited from targeting individuals on the basis of personally identifying information or information they have not voluntarily disclosed. Companies should be required to obtain active user consent to target users on the basis of any audience category or profile attribute. Restrictions should also prevent companies from allowing ad targeting in ways that can be discriminatory, in violation of national and international laws.
  • Support encryption and reform surveillance laws: Surveillance-related laws and practices should be reformed to comply with the 13 “Necessary and Proportionate” principles, a framework for assessing whether current or proposed surveillance laws and practices are compatible with international human rights norms. Lawmakers must pass rights-respecting surveillance reform that does not weaken or undermine encryption standards; ban or limit users’ access to encryption; require companies to provide “back doors” or vulnerabilities that allow for third-party access to unencrypted data; or require companies to hand over encryption keys.
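
The “structured format” called for above implies a machine-readable export that covers both the data a company collects and the data it infers. Below is a minimal, hypothetical sketch of what such an export might contain; the field names, categories, and values are illustrative and are not drawn from any existing regulation or company format.

    import json
    from datetime import datetime, timezone

    # Hypothetical structured export for one user, separating data the user
    # provided, data observed about them, and data the company inferred.
    user_export = {
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "provided_data": {
            "email": "user@example.com",
            "date_of_birth": "1990-05-17",
        },
        "observed_data": {
            "pages_viewed_last_30_days": 412,
            "devices": ["android-phone", "desktop-browser"],
        },
        "inferred_data": [
            # Each inference names its source and purpose, so users can see
            # how it was derived and why it is retained.
            {
                "attribute": "interest:outdoor_sports",
                "derived_from": "browsing history",
                "used_for": "ad targeting",
            },
            {
                "attribute": "predicted_churn_risk:low",
                "derived_from": "usage-frequency model",
                "used_for": "retention campaigns",
            },
        ],
        "retention": {
            "observed_data_deleted_after": "18 months",
            "inferred_data_deleted_after": "12 months",
        },
    }

    print(json.dumps(user_export, indent=2))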

Commit to corporate governance reform and mandate robust oversight

Governments should ensure that the power to restrict online speech or access personal data is subject to meaningful oversight that guards against abuses of censorship and surveillance powers. Without credible oversight, government measures to address harmful and malicious activities via private platforms and services, or to address other social, economic, and security challenges, will be plagued by public and industry mistrust.

  • Require strong corporate governance and accountability for human rights impacts: Companies should be required by law to implement board oversight of human rights impacts of their business—by ensuring adequate human rights expertise at the board level—including risks associated with targeted advertising and algorithmic systems. Systematic internal and external reporting can provide accountability for this oversight. Requiring companies to disclose nonfinancial information about their environmental, social, and governance (ESG) impacts, including what percentage of their revenue comes from targeted advertising, would strengthen such mandates.
  • Require companies to conduct human rights due diligence and disclose risks: Governments should compel companies to assess potential human rights impacts that could occur in relation to the use of the company’s platform, service, or device in any market in which they operate. Companies should also be required to disclose their findings, including those related to freedom of expression and privacy, and the steps they are taking to mitigate any risks identified. In such disclosures, they should protect the privacy of those affected or potentially affected, especially those who are most vulnerable.
  • Ensure effective and independent oversight: Any government body empowered to flag content for removal by companies, to require the blocking of services, or to compel network shutdowns must be subject to robust, independent oversight and accountability mechanisms to ensure that the power to compel companies to restrict online speech, suspend accounts, or shut down networks is not abused in a manner that violates human rights.
  • Empower investor and shareholder oversight: Governments must help elevate the voices of ESG investors and shareholders in corporate governance by making it easier for them to file resolutions. They must also discourage dual-class share structures and other practices that give more weight to votes by owners and insiders than to those cast by other shareholders.

Model and require maximum transparency

Governments should lead by example on transparency and publish regular and accessible data disclosing all requirements and demands made by governments affecting users’ freedom of expression and privacy. They should also require that companies disclose meaningful and comprehensive information about the full range of enforcement actions they take that may affect users’ freedom of expression or privacy. Governments should:

  • Require government transparency by law: Governments should publish relevant and accessible data disclosing the volume, nature, and purpose of all demands made by government entities (national, regional, and local) that result in the restriction of speech, access to information, or access to service, or that aim to compel companies to hand over or otherwise provide access to user data.
  • Require corporate transparency by law: Companies should be required to publish information about their policies for policing speech, as well as data about the volume and nature of content that is restricted or removed, or accounts deactivated, for any reason.

In addition, governments should:

  • Require companies to disclose how they receive and handle government demands and third-party requests to access user information or restrict speech or information flows, as well as the impact of these policies and practices on the human rights of their users.
  • Require companies to disclose internal rules and enforcement over content and ad policies, including policies for ad targeting, as well as rules and processes for the development and use of algorithms and algorithmic systems.
  • Require companies to maintain universal, publicly accessible libraries of all paid advertisements on their properties, for all countries, including relevant targeting parameters, to the extent that doing so will not violate human rights.

Ensure adequate access to remedy

People have a right to meaningful and effective remedy, including legal recourse, when their rights are violated. Governments should ensure that individuals have a clear right to legal recourse when their freedom of expression or privacy rights are violated by any government authority, corporate entity, or company complying with a government demand. Companies should also be required by law to provide accessible and effective grievance and remedy mechanisms for people who believe that their rights have been violated in connection with the use of a company’s products and services.

Engage with a diversity of stakeholders

Governments must work with civil society, companies, and other governments to develop and enforce effective, constructive regulation that prioritizes the human rights of all internet users. They must:

  • Uphold commitments made in multilateral and multi-stakeholder forums: State members of the Open Government Partnership, an organization dedicated to making governments more open, accountable, and responsive to citizens, and the Freedom Online Coalition, a partnership of 32 governments working to advance internet freedom, should lead by example and align their legal frameworks and policy responses with human rights and the commitments they have made in these forums.
  • Collaborate globally: Governments that are committed to protecting freedom of expression and data protection should work proactively and collaboratively with one another, as well as with civil society and the private sector, to establish positive guidelines for minimizing online harms without infringing on human rights.

Footnotes

[1] For more detailed recommendations for policymakers related to these systems, please see our 2020 It’s the Business Model series.

Read more:
Executive summary

Top takeaways from our 2020 research

Key findings

Companies are improving in principle, but failing in practice

China’s tech giants

China’s biggest tech companies have proven they can change. But the state is still their number one stakeholder.