Methods & Standards

RDR’s methodology offers a road map companies can use to improve. It also provides a tool for researchers, advocates, policymakers, and investors to push companies in the right direction.

Our theory of change is based on the power of benchmarking: we evaluate companies against indicators that set high but achievable standards for corporate transparency and for policies that align with internationally recognized human rights standards.

Researchers examine overarching “parent” company policies and practices, in addition to the disclosed policies and practices of selected services. When a company maintains different policies for the different jurisdictions in which it operates, we evaluate the policies that apply in the company’s home market. For example, we evaluate the Meta privacy policy that applies to users in the U.S., because Meta is headquartered in the U.S.

How do we select companies for evaluation?

We consider three main criteria in selecting companies for evaluation:

  • User base: The telecommunications companies in the RDR Index have a substantial user base in their home markets and beyond, and the digital platforms have a large number of global users. The policies and practices of these companies affect billions of people worldwide.
  • Geographic reach and distribution: Headquartered in Africa, Asia, the Middle East, Europe, and North America, these companies all operate globally. Many of them offer services in almost every country in the world.
  • Relevance to users’ freedom of expression and privacy rights: All of these companies provide services that have become vital to the exercise and protection of human rights, including the rights to freedom of expression and privacy. The companies we evaluate operate or have significant user bases in countries where human rights are not universally respected.

Our methodology and indicators have been through multiple cycles of revision, which are documented in the Methodology Development Archive below. Today, we evaluate companies on 58 indicators in three main categories: governance; freedom of expression and information; and privacy. Each category contains multiple indicators, and each indicator comprises a series of elements, or questions, that enable researchers to evaluate company disclosure for that category.
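As a rough illustration of this hierarchy (the type names and fields below are our own shorthand, not a published RDR schema), the structure can be sketched as follows:

```python
from dataclasses import dataclass, field

# Illustrative shorthand for the RDR Index structure; not a published RDR schema.

@dataclass
class Element:
    question: str  # one disclosure question, answered yes / partial / no / N/A

@dataclass
class Indicator:
    code: str  # e.g. "G1", "F3a", "P10b"
    elements: list[Element] = field(default_factory=list)

@dataclass
class Category:
    name: str  # governance; freedom of expression and information; privacy
    indicators: list[Indicator] = field(default_factory=list)
```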

Governance

Indicators in this category seek evidence that the company has robust governance processes in place to ensure that it respects the human rights to freedom of expression, information, and privacy. These rights are part of the Universal Declaration of Human Rights (UDHR) and are enshrined in the International Covenant on Civil and Political Rights (ICCPR). They apply online as well as offline. To perform well in this category, a company’s disclosure should at least follow, and ideally surpass, the UN Guiding Principles on Business and Human Rights (UNGPs) and other industry-specific human rights standards focused on freedom of expression and privacy, such as those adopted by the Global Network Initiative.

Indicators in this category look for companies to disclose:

  • Commitment to human rights: An explicit public commitment to freedom of expression and privacy as human rights, including an explicit commitment to protect human rights in their development and deployment of algorithmic systems (Indicator G1).
  • Governance management and oversight: Clear evidence of senior-level management and oversight over freedom of expression and privacy (Indicator G2).
  • Internal implementation: Employee training and whistleblower programs addressing these issues (Indicator G3).
  • Human rights due diligence: Processes to identify and mitigate the potential risks of companies’ products, services, and business operations (Indicator G4). Indicator G4 consists of a family of indicators that address whether companies provide evidence of conducting robust, systematic risk assessments of government regulations (Indicator G4a), policy enforcement (Indicator G4b), targeted advertising policies and practices (Indicator G4c), algorithmic systems (Indicator G4d), and zero-rating (Indicator G4e).
  • Stakeholder engagement: Systematic and credible stakeholder engagement, ideally including membership in a multi-stakeholder organization committed to human rights principles, including freedom of expression and privacy (Indicator G5).
  • Remedy: Clear, predictable grievance mechanisms enabling users to notify companies when their freedom of expression and privacy rights have been affected or violated in connection with the companies’ business, plus evidence that the company provides appropriate responses or remedies (Indicator G6a). We also look for platforms to disclose similarly clear, predictable processes enabling users to appeal content moderation decisions (Indicator G6b).

Freedom of expression and information

Indicators in this category seek evidence that the company respects the right to freedom of expression and information, as articulated in the UDHR, the ICCPR, and other international human rights instruments. The company’s disclosed policies and practices should demonstrate how it works to avoid contributing to actions that may interfere with this right, except where such actions are lawful, proportionate, and for a justified purpose. Companies that perform well in this category demonstrate a strong public commitment to transparency, not only in how they respond to demands from governments and others, but also in how they determine, communicate, and enforce the private rules and commercial practices that affect users’ fundamental right to freedom of expression and information.

In this category, we evaluate:

  • Access to policies: We expect companies to ensure that policies affecting users’ freedom of expression and information rights are easy to find and understand. Terms of service or community guidelines that specify what types of content or activities are prohibited should be easy to access from a service’s main web page or app, be available in the primary languages of the company’s home market, and be presented in an easy-to-read format. We also expect companies to clearly disclose if and how they directly notify users of changes to these policies (Indicators F1, F2).
  • Terms of service enforcement: We expect companies to clearly disclose what types of content and activities are prohibited and their processes for enforcing these rules (Indicator F3a). We also expect companies to publish data about the volume and nature of content and accounts they have removed or restricted for violations of their terms (Indicators F4a and F4b), and to disclose if they notify users when they have removed content, restricted a user’s account, or otherwise restricted access to content or a service (Indicator F8).
  • Ad content and ad targeting rules and enforcement: Companies that enable any type of advertising on their services or platforms should clearly disclose what types of ad content are prohibited (for example, ads that discriminate against individuals or groups based on personal attributes like age, religion, gender, and ethnicity) (Indicator F3b). Companies that enable advertisers and other third parties to target their users with tailored ads or content should publish clear policies describing their ad targeting rules and which targeting parameters (such as audience categories based on age, location, or other user characteristics) are not permitted (Indicator F3c). Companies should also disclose their processes for identifying breaches of their ad content and ad targeting rules, and provide evidence that they enforce these policies by publishing data about content removed for violations (Indicator F4c).
  • Algorithmic system use and curation policies: Because algorithmic systems have the potential to cause human rights harms, we expect companies to publish policies that clearly describe how they use these systems across their services and platforms (Indicator F1d). Just as terms of service or user agreements outline what types of content or activities are prohibited, companies that use algorithmic systems should publish a clear and accessible policy stating the nature and functions of these systems. Companies should also publish clear policies describing their use of algorithmic curation, recommendation, and ranking systems, including the variables that influence such systems (Indicator F12).
  • Government and private demands: We expect companies to clearly disclose their process for responding to government and private demands to restrict content and user accounts (Indicators F5a, F5b). We expect companies to produce data about the types of requests they receive and the number of these requests with which they comply (Indicators F6, F7). Private requests for content restrictions can come from a self-regulatory body such as the Internet Watch Foundation, or through a formal notice-and-takedown system, such as the U.S. Digital Millennium Copyright Act.
  • Identity policies: We expect companies to disclose whether they ask users to verify their identities using government-issued ID or other information tied to their offline identities (Indicator F11). The ability to communicate anonymously is important for the exercise and defense of human rights around the world. Requiring users to provide a company with identifying information presents human rights risks to those who, for example, voice opinions that do not align with a government’s views or who engage in activism that a government does not permit.
  • Network management and shutdowns (telcos only): Telecommunications companies can shut down a network, or block or slow down access to specific services on it. We expect companies to clearly disclose if they engage in practices that affect the flow of content through their networks, such as by throttling or traffic shaping (Indicator F9). We also expect companies to clearly disclose their policies and practices for handling government network shutdown demands (Indicator F10). We expect companies to explain the circumstances under which they might take such action and to report on the requests they receive and with which they comply.

Privacy

Indicators in this category seek evidence that the company demonstrates concrete ways in which it respects users’ right to privacy, as articulated in the UDHR, the ICCPR, and other international human rights instruments. The company’s disclosed policies and practices should demonstrate how it works to avoid contributing to actions that may interfere with users’ privacy, except where such actions are lawful, proportionate, and justified. They should also demonstrate a strong commitment to protect and defend users’ digital security. Companies that perform well on these indicators demonstrate a strong public commitment to transparency, not only in how they respond to government and private demands, but also in how they determine, communicate, and enforce the private rules and commercial practices that affect users’ privacy.

In this category, we evaluate:

  • Collection and handling of user information: Companies should clearly disclose each type of user information they collect (Indicator P3a), infer (Indicator P3b), and share (Indicator P4), for what purposes (Indicator P5), and for how long they retain it (Indicator P6). We also expect companies to give users control over their own information, including options to control how their information is used for advertising and for the development of algorithmic systems, and to turn off targeted advertising by default (Indicator P7). Companies should also allow users to obtain all of the information a company holds on them (Indicator P8) and should clearly disclose if and how they track people across the web using cookies, widgets, or other tracking tools embedded on third-party websites (Indicator P9).
  • Security: We expect companies to clearly disclose internal measures they take to keep their products and services secure (Indicator P13), explain how they address security vulnerabilities when they are discovered (Indicator P14), and outline their policies for responding to data breaches (Indicator P15). We also expect companies to disclose that they encrypt user communications and private content (Indicator P16), that they enable features to help users keep their accounts secure (Indicator P17), and to publish materials educating users about how they can protect themselves from cybersecurity risks (Indicator P18).
  • Government and private demands: We expect companies to clearly disclose their process for responding to government and private demands to hand over user information (Indicators P10a, P10b). We expect companies to produce data about the types of requests they receive and the number of these requests with which they comply (Indicators P11a, P11b).

Research process and steps

RDR works with a network of international researchers who collect data on each company and evaluate company policies in the language of the company’s operating market. A list of our researchers can be found on our team page.

Our research process consists of seven steps of rigorous cross-checking and internal and external review:

  • Step 1: Data collection. A primary research team collects data for each company and provides a preliminary assessment of company performance across all indicators.
  • Step 2: Secondary review. A second team of researchers fact-checks the assessment provided by primary researchers in Step 1.
  • Step 3: Review and reconciliation. RDR research staff examine the results from Steps 1 and 2 and resolve any differences.
  • Step 4: First horizontal review. Research staff cross-check the indicators to ensure they have been evaluated consistently for each company.
  • Step 5: Company feedback. Initial results are sent to companies for comment and feedback. All feedback received from companies by our deadline is reviewed by RDR staff, who make decisions about score changes or adjustments.
  • Step 6: Second horizontal review. Research staff conduct a second horizontal review, cross-checking the indicators for consistency and quality control.
  • Step 7: Final scoring. The RDR team calculates final scores.

Evaluation and scoring

RDR Index scores are based on an evaluation of company disclosure on several levels—at the parent company level, the operating company level (for telecommunications companies), and the service level. This enables the research team to develop as complete an understanding as possible about how companies disclose or apply their policies.

For each research cycle, we set a cutoff date for active policies that we then capture, file, and evaluate. We notify companies of this date well in advance, and do not consider new information published by companies after that date.

Scoring

Companies receive an average score of their performance across all RDR Index indicators. Each indicator has a list of elements, and companies receive credit (full, partial, or no credit) for each element they fulfill. The evaluation includes an assessment of disclosure for every element of each indicator, based on one of the following possible answers (a short sketch of the scoring arithmetic follows the points list below):

  • “Yes”/ full disclosure: Company disclosure meets the element requirement.
  • “Partial”: Company disclosure has met some but not all aspects of the element, or the disclosure is not comprehensive enough to satisfy the full scope of the element.
  • “No disclosure found”: Researchers were unable to find information provided by the company on its website that answers the element question.
  • “No”: Company disclosure exists, but it specifically does not disclose to users what the element is asking. This is distinct from the option of “no disclosure found,” although both result in no credit.
  • “N/A”: Not applicable. This element does not apply to the company or service. Elements marked as N/A will not be counted for or against a company’s score.

Points

  • Yes/full disclosure = 100
  • Partial = 50
  • No = 0
  • No disclosure found = 0
  • N/A = excluded from score and averages
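
To make the arithmetic above concrete, here is a minimal sketch of the scoring logic in Python. The function names and data layout are illustrative assumptions rather than RDR’s actual tooling; the point values and the exclusion of N/A elements follow the rules above.

```python
# Minimal sketch of the RDR scoring arithmetic (illustrative; not RDR's actual tooling).
# Element answers map to points; "N/A" elements are excluded from all averages.

POINTS = {"yes": 100, "partial": 50, "no": 0, "no disclosure found": 0}

def indicator_score(element_answers):
    """Average the points of all applicable elements of one indicator."""
    applicable = [a for a in element_answers if a != "n/a"]
    if not applicable:  # an indicator whose elements are all N/A is excluded
        return None
    return sum(POINTS[a] for a in applicable) / len(applicable)

def company_score(indicators):
    """Average across all indicators that apply to the company."""
    scores = [s for s in (indicator_score(e) for e in indicators) if s is not None]
    return sum(scores) / len(scores)

# Example: one indicator fully met, one partially met, one with an N/A element.
example = [
    ["yes", "yes"],                          # -> 100
    ["partial", "no"],                       # -> 25
    ["yes", "n/a", "no disclosure found"],   # -> 50 (the N/A element is excluded)
]
print(round(company_score(example), 1))      # 58.3
```

In practice, RDR also evaluates disclosure at the parent, operating-company, and service levels, as described above; this sketch shows only the element-to-indicator-to-company averaging.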

Company engagement

Proactive and open stakeholder engagement has been a critical component of RDR’s work and of the RDR Index methodology. We communicate with companies throughout the research process.

Open dialogue and communication. Before the research begins, we contact all companies we plan to evaluate and notify them of our plans, describing our research process and timeline. Following several stages of research and review, we share each company’s initial results with them. We invite companies to provide written feedback as well as additional source documents. In many cases, the research team conducts conference calls or meetings with companies to discuss the initial findings as well as broader questions about our methodology.

Incorporating company feedback into our findings. While engagement with the companies is critical to understanding their positions and ensuring we review relevant disclosures, we only evaluate information that companies disclose publicly. We do not consider a score change unless companies identify publicly available documentation that supports a change.

Methodology development archive

The original RDR Index methodology was developed over three years of research, testing, and revision, in consultation with more than 100 stakeholders across several phases. Since then, given the fast-changing nature of the technology sector, we have continued to expand and improve the RDR Index methodology.


Our methodology development and revision processes are described in detail in the documents listed below. You can also access the datasets from each version of the RDR Index since 2015 using the links below.

The original methodology development phase was carried out in 2013, in partnership with the University of Pennsylvania through an interdisciplinary, university-wide project called “New Technologies, Human Rights and Transparency.” It was hosted by the Center for Global Communication Studies at the Annenberg School for Communication in collaboration with students and faculty from the Wharton School of Business, Penn Law, Penn Engineering, and the School of Arts and Sciences.

Other former institutional partners: