2020 Indicators

The 2020 RDR Corporate Accountability Index ranks 26 companies according to 58 indicators evaluating company disclosure of policies and practices affecting freedom of expression and privacy.

Company scores and accompanying analysis have been generated through a rigorous process including peer review, company feedback, and quality control.

The data produced by the RDR Index informs the work of human rights advocates, policymakers, and responsible investors. It can also help companies improve their own policies and practices.

Download a PDF of the 2020 indicators with definitions and research guidance, or explore them online below:

G: Governance

G1. Policy commitment
G2. Governance and management oversight
G3. Internal implementation
G4. Human rights due diligence
G4(a). Impact assessment: Governments and regulations
G4(b). Impact assessment: Processes for policy enforcement
G4(c). Impact assessment: Targeted advertising
G4(d). Impact assessment: Algorithmic systems
G4(e). Impact assessment: Zero-rating
G5. Stakeholder engagement and accountability
G6. Remedy and appeals
G6(a). Remedy
G6(b). Process for content moderation appeals

F: Freedom of Expression and Information

F1. Access to policies
F1(a). Access to terms of service
F1(b). Access to advertising content policies
F1(c). Access to advertising targeting policies
F1(d). Access to algorithmic system use policies
F2. Notification of policy changes
F2(a). Changes to terms of service
F2(b). Changes to advertising content policies
F2(c). Changes to advertising targeting policies
F2(d). Changes to algorithmic system use policies
F3. Process for policy enforcement
F3(a). Process for terms of service enforcement
F3(b). Advertising content rules and enforcement
F3(c). Advertising targeting rules and enforcement
F4. Data about policy enforcement
F4(a). Data about content restrictions to enforce terms of service
F4(b). Data about account restrictions to enforce terms of service
F4(c). Data about advertising content and advertising targeting policy enforcement
F5. Process for responding to third-party requests to restrict content or accounts
F5(a). Process for responding to government demands to restrict content or accounts
F5(b). Process for responding to private requests for content or account restriction
F6. Data about government demands to restrict content and accounts
F7. Data about private requests for content or account restriction
F8. User notification about content and account restriction
F9. Network management (telecommunications companies)
F10. Network shutdown (telecommunications companies)
F11. Identity policy
F12. Algorithmic content curation, recommendation, and/or ranking systems
F13. Automated software agents (“bots”)

P: Privacy

P1. Access to policies affecting users’ privacy
P1(a). Access to privacy policies
P1(b). Access to algorithmic system development policies
P2. Notification of changes
P2(a). Changes to privacy policies
P2(b). Changes to algorithmic system development policies
P3. User information collection and inference
P3(a). Collection of user information
P3(b). Inference of user information
P4. Sharing of user information
P5. Purpose for collecting, inferring, and sharing user information
P6. Retention of user information
P7. Users’ control over their own user information
P8. Users’ access to their own user information
P9. Collection of user information from third parties
P10. Process for responding to demands for user information
P10(a). Process for responding to government demands for user information
P10(b). Process for responding to private requests for user information
P11. Data about demands for user information
P11(a). Data about government demands for user information
P11(b). Data about private requests for user information
P12. User notification about third-party requests for user information
P13. Security oversight
P14. Addressing security vulnerabilities
P15. Data breaches
P16. Encryption of user communication and private content (digital platforms)
P17. Account security (digital platforms)
P18. Inform and educate users about potential risks

G: Governance

Indicators in this category seek evidence that the company has governance processes in place to ensure that it respects the human rights to freedom of expression and privacy. Both rights are part of the Universal Declaration of Human Rights, and are enshrined in the International Covenant on Civil and Political Rights. They apply online as well as offline. In order for a company to perform well in this category, the company’s disclosure should at least follow, and ideally surpass, the UN Guiding Principles on Business and Human Rights and other industry-specific human rights standards focused on freedom of expression and privacy such as those adopted by the Global Network Initiative.

G1. Policy commitment

The company should publish a formal policy commitment to respect users’ human rights to freedom of expression and information and privacy.

Elements:

    1. Does the company make an explicit, clearly articulated policy commitment to human rights, including to freedom of expression and information?
    2. Does the company make an explicit, clearly articulated policy commitment to human rights, including to privacy?
    3. Does the company disclose an explicit, clearly articulated policy commitment to human rights in its development and use of algorithmic systems?

G2. Governance and management oversight

The company’s senior leadership should exercise oversight over how its policies and practices affect freedom of expression and information, and privacy.

Elements:

    1. Does the company clearly disclose that the board of directors exercises formal oversight over how company practices affect freedom of expression and information?
    2. Does the company clearly disclose that the board of directors exercises formal oversight over how company practices affect privacy?
    3. Does the company clearly disclose that an executive-level committee, team, program or officer oversees how company practices affect freedom of expression and information?
    4. Does the company clearly disclose that an executive-level committee, team, program or officer oversees how company practices affect privacy?
    5. Does the company clearly disclose that a management-level committee, team, program or officer oversees how company practices affect freedom of expression and information?
    6. Does the company clearly disclose that a management-level committee, team, program or officer oversees how company practices affect privacy?

G3. Internal implementation

The company should have mechanisms in place to implement its commitments to freedom of expression and information and privacy within the company.
Elements:

  1. Does the company clearly disclose that it provides employee training on freedom of expression and information issues?
  2. Does the company clearly disclose that it provides employee training on privacy issues?
  3. Does the company clearly disclose that it maintains an employee whistleblower program through which employees can report concerns related to how the company treats its users’ freedom of expression and information rights?
  4. Does the company clearly disclose that it maintains an employee whistleblower program through which employees can report concerns related to how the company treats its users’ privacy rights?

G4: Human rights due diligence

G4(a). Impact assessment: Governments and regulations

The company should conduct regular, comprehensive, and credible due diligence, through robust human rights impact assessments, to identify how government regulations and policies affect freedom of expression and information and privacy, and to mitigate any risks posed by those impacts in the jurisdictions in which it operates.

Elements:

    1. Does the company assess how laws affect freedom of expression and information in jurisdictions where it operates?
    2. Does the company assess how laws affect privacy in jurisdictions where it operates?
    3. Does the company assess freedom of expression and information risks associated with existing products and services in jurisdictions where it operates?
    4. Does the company assess privacy risks associated with existing products and services in jurisdictions where it operates?
    5. Does the company assess freedom of expression and information risks associated with a new activity, including the launch and/or acquisition of new products, services, or companies, or entry into new markets or jurisdictions?
    6. Does the company assess privacy risks associated with a new activity, including the launch and/or acquisition of new products, services, or companies, or entry into new markets or jurisdictions?
    7. Does the company conduct additional evaluation whenever the company’s risk assessments identify concerns?
    8. Do senior executives and/or members of the company’s board of directors review and consider the results of assessments and due diligence in their decision-making?
    9. Does the company conduct assessments on a regular schedule?
    10. Are the company’s assessments assured by an external third party?

G4(b). Impact assessment: Processes for policy enforcement

The company should conduct regular, comprehensive, and credible due diligence, such as through robust human rights impact assessments, to identify how its processes for policy enforcement affect users’ fundamental rights to freedom of expression and information, to privacy, and to non-discrimination, and to mitigate any risks posed by those impacts.

Elements:

    1. Does the company assess freedom of expression and information risks of enforcing its terms of service?
    2. Does the company conduct risk assessments of its enforcement of its privacy policies?
    3. Does the company assess discrimination risks associated with its processes for enforcing its terms of service?
    4. Does the company assess discrimination risks associated with its processes for enforcing its privacy policies?
    5. Does the company conduct additional evaluation whenever the company’s risk assessments identify concerns?
    6. Do senior executives and/or members of the company’s board of directors review and consider the results of assessments and due diligence in their decision-making?
    7. Does the company conduct assessments on a regular schedule?
    8. Are the company’s assessments assured by an external third party?
    9. Is the external third party that assures the assessments accredited to a relevant and reputable human rights standard by a credible organization?

G4(c). Impact assessment: Targeted advertising

The company should conduct regular, comprehensive, and credible due diligence, such as through robust human rights impact assessments, to identify how all aspects of its targeted advertising policies and practices affect users’ fundamental rights to freedom of expression and information, to privacy, and to non-discrimination, and to mitigate any risks posed by those impacts.

Elements:

    1. Does the company assess freedom of expression and information risks associated with its targeted advertising policies and practices?
    2. Does the company assess privacy risks associated with its targeted advertising policies and practices?
    3. Does the company assess discrimination risks associated with its targeted advertising policies and practices?
    4. Does the company conduct additional evaluation whenever the company’s risk assessments identify concerns?
    5. Do senior executives and/or members of the company’s board of directors review and consider the results of assessments and due diligence in their decision-making?
    6. Does the company conduct assessments on a regular schedule?
    7. Are the company’s assessments assured by an external third party?
    8. Is the external third party that assures the assessment accredited to a relevant and reputable human rights standard by a credible organization?

G4(d). Impact assessment: Algorithmic systems

The company should conduct regular, comprehensive, and credible due diligence, such as through robust human rights impact assessments, to identify how all aspects of its policies and practices related to the development and use of algorithmic systems affect users’ fundamental rights to freedom of expression and information, to privacy, and to non-discrimination, and to mitigate any risks posed by those impacts.

Elements:

    1. Does the company assess freedom of expression and information risks associated with its development and use of algorithmic systems?
    2. Does the company assess privacy risks associated with its development and use of algorithmic systems?
    3. Does the company assess discrimination risks associated with its development and use of algorithmic systems?
    4. Does the company conduct additional evaluation whenever the company’s risk assessments identify concerns?
    5. Do senior executives and/or members of the company’s board of directors review and consider the results of assessments and due diligence in their decision-making?
    6. Does the company conduct assessments on a regular schedule?
    7. Are the company’s assessments assured by an external third party?
    8. Is the external third party that assures the assessment accredited to a relevant and reputable human rights standard by a credible organization?

G4(e). Impact assessment: Zero-rating

If the company engages in zero-rating, it should conduct regular, comprehensive, and credible due diligence, such as through robust human rights impact assessments, to identify how all aspects of its zero-rating policies and practices affect users’ fundamental rights to freedom of expression and information, to privacy, and to freedom from discrimination, and to mitigate any risks posed by those impacts.

Elements:

    1. Does the company assess freedom of expression and information risks associated with its zero-rating programs?
    2. Does the company assess privacy risks associated with its zero-rating programs?
    3. Does the company assess discrimination risks associated with its zero-rating programs?
    4. Does the company conduct additional evaluation whenever the company’s risk assessments identify concerns?
    5. Do senior executives and/or members of the company’s board of directors review and consider the results of assessments and due diligence in their decision-making?
    6. Does the company conduct assessments on a regular schedule?
    7. Are the company’s assessments assured by an external third party?
    8. Is the external third party that assures the assessment accredited to a relevant and reputable human rights standard by a credible organization?

G5. Stakeholder engagement and accountability

The company should engage with a range of stakeholders on the company’s impact on freedom of expression and information, privacy, and potential risks of related human rights harms such as discrimination.

Elements:

    1. Is the company a member of one or more multi-stakeholder initiatives that address the full range of ways in which users’ fundamental rights to freedom of expression and information, privacy, and non-discrimination may be affected in the course of the company’s operations?
    2. If the company is not a member of one or more such multi-stakeholder initiatives, is the company a member of any organization that engages systematically and on a regular basis with non-industry and non-governmental stakeholders on freedom of expression and privacy issues?
    3. If the company is not a member of one of these organizations, does the company disclose that it initiates or participates in meetings with stakeholders that represent, advocate on behalf of, or are people whose rights to freedom of expression and information and to privacy are directly impacted by the company’s business?

G6. Remedy and appeals

G6(a). Remedy

The company should have clear and predictable grievance and remedy mechanisms to address users’ freedom of expression and privacy concerns.

Elements:

    1. Does the company clearly disclose it has a grievance mechanism(s) enabling users to submit complaints if they feel their freedom of expression and information rights have been adversely affected by the company’s policies or practices?
    2. Does the company clearly disclose it has a grievance mechanism(s) enabling users to submit complaints if they feel their privacy has been adversely affected by the company’s policies or practices?
    3. Does the company clearly disclose its procedures for providing remedy for freedom of expression and information-related grievances?
    4. Does the company clearly disclose its procedures for providing remedy for privacy-related grievances?
    5. Does the company clearly disclose timeframes for its grievance and remedy procedures?
    6. Does the company clearly disclose the number of complaints received related to freedom of expression?
    7. Does the company clearly disclose the number of complaints received related to privacy?
    8. Does the company clearly disclose evidence that it is providing remedy for freedom of expression grievances?
    9. Does the company clearly disclose evidence that it is providing remedy for privacy grievances?

G6(b). Process for content moderation appeals

The company should offer users clear and predictable appeals mechanisms and processes for appealing content-moderation actions.

Elements:

    1. Does the company clearly disclose that it offers affected users the ability to appeal content-moderation actions?
    2. Does the company clearly disclose that it notifies the users who are affected by a content-moderation action?
    3. Does the company clearly disclose a timeframe for notifying affected users when it takes a content-moderation action?
    4. Does the company clearly disclose when appeals are not permitted?
    5. Does the company clearly disclose its process for reviewing appeals?
    6. Does the company clearly disclose its timeframe for reviewing appeals?
    7. Does the company clearly disclose that such appeals are reviewed by at least one human not involved in the original content-moderation action?
    8. Does the company clearly disclose what role automation plays in reviewing appeals?
    9. Does the company clearly disclose that the affected users have an opportunity to present additional information that will be considered in the review?
    10. Does the company clearly disclose that it provides the affected users with a statement outlining the reason for its decision?
    11. Does the company clearly disclose evidence that it is addressing content moderation appeals?

F: Freedom of Expression and Information

Indicators in this category seek evidence that the company demonstrates it respects the right to freedom of expression and information, as articulated in the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights, and other international human rights instruments. The company’s disclosed policies and practices demonstrate how it works to avoid contributing to actions that may interfere with this right, except where such actions are lawful, proportionate, and for a justifiable purpose. Companies that perform well on these indicators demonstrate a strong public commitment to transparency not only in terms of how they respond to government and others’ demands, but also how they determine, communicate, and enforce private rules and commercial practices that affect users’ fundamental right to freedom of expression and information.

F1: Access to policies

F1(a). Access to terms of service

The company should offer terms of service that are easy to find and easy to understand.

Elements:

    1. Are the company’s terms of service easy to find?
    2. Are the terms of service available in the primary language(s) spoken by users in the company’s home jurisdiction?
    3. Are the terms of service presented in an understandable manner?

F1(b). Access to advertising content policies

The company should offer advertising content policies that are easy to find and easy to understand.

Elements:

    1. Are the company’s advertising content policies easy to find?
    2. Are the company’s advertising content policies available in the primary language(s) spoken by users in the company’s home jurisdiction?
    3. Are the company’s advertising content policies presented in an understandable manner?
    4. (For mobile ecosystems): Does the company clearly disclose that it requires apps made available through its app store to provide users with an advertising content policy?
    5. (For personal digital assistant ecosystems): Does the company clearly disclose that it requires skills made available through its skill store to provide users with an advertising content policy?

F1(c). Access to advertising targeting policies

The company should offer advertising targeting policies that are easy to find and easy to understand.

Elements:

    1. Are the company’s advertising targeting policies easy to find?
    2. Are the advertising targeting policies available in the primary language(s) spoken by users in the company’s home jurisdiction?
    3. Are the advertising targeting policies presented in an understandable manner?
    4. (For mobile ecosystems): Does the company clearly disclose that it requires apps made available through its app store to provide users with an advertising targeting policy?
    5. (For personal digital assistant ecosystems): Does the company clearly disclose that it requires skills made available through its skill store to provide users with an advertising targeting policy?

F1(d). Access to algorithmic system use policies

The company should offer policies related to its use of algorithms that are easy for users to find and understand.

Elements:

    1. Are the company’s algorithmic system use policies easy to find?
    2. Are the algorithmic system use policies available in the primary language(s) spoken by users in the company’s home jurisdiction?
    3. Are the algorithmic system use policies presented in an understandable manner?

F2: Notification of policy changes

F2(a). Changes to terms of service

The company should clearly disclose that it directly notifies users when it changes its terms of service, prior to these changes coming into effect.

Elements:

    1. Does the company clearly disclose that it directly notifies users about all changes to its terms of service?
    2. Does the company clearly disclose how it will directly notify users of changes?
    3. Does the company clearly disclose the timeframe within which it directly notifies users of changes prior to these changes coming into effect?
    4. Does the company maintain a public archive or change log?

F2(b). Changes to advertising content policies

The company should clearly disclose that it directly notifies users when it changes its advertising content policies, prior to these changes coming into effect.

Elements:

    1. Does the company clearly disclose that it directly notifies users about changes to its advertising content policies?
    2. Does the company clearly disclose how it will directly notify users of changes?
    3. Does the company clearly disclose the timeframe within which it directly notifies users of changes prior to these changes coming into effect?
    4. Does the company maintain a public archive or change log?
    5. (For mobile ecosystems): Does the company clearly disclose that it requires apps made available through its app store to notify users when the apps change their advertising content policies?
    6. (For personal digital assistant ecosystems): Does the company clearly disclose that it requires skills made available through its skill store to notify users when the skills change their advertising content policies?

F2(c). Changes to advertising targeting policies

The company should clearly disclose that it directly notifies users when it changes its advertising targeting policies, prior to these changes coming into effect.

Elements:

    1. Does the company clearly disclose that it directly notifies users about changes to its advertising targeting policies?
    2. Does the company clearly disclose how it will directly notify users of changes?
    3. Does the company clearly disclose the timeframe within which it directly notifies users of changes prior to these changes coming into effect?
    4. Does the company maintain a public archive or change log?
    5. (For mobile ecosystems): Does the company clearly disclose that it requires apps made available through its app store to directly notify users when the apps change their advertising targeting policies?
    6. (For personal digital assistant ecosystems): Does the company clearly disclose that it requires skills made available through its skill store to notify users when the skills change their advertising targeting policies?

F2(d). Changes to algorithmic system use policies

The company should clearly disclose that it directly notifies users when it changes its algorithmic system use policies, prior to these changes coming into effect.

Elements:

    1. Does the company clearly disclose that it directly notifies users about changes to its algorithmic system use policies?
    2. Does the company clearly disclose how it will directly notify users of changes?
    3. Does the company clearly disclose the timeframe within which it directly notifies users of changes prior to these changes coming into effect?
    4. Does the company maintain a public archive or change log?

F3: Process for policy enforcement

F3(a). Process for terms of service enforcement

The company should clearly disclose the circumstances under which it may restrict content or user accounts.

Elements:

    1. Does the company clearly disclose what types of content or activities it does not permit?
    2. Does the company clearly disclose why it may restrict a user’s account?
    3. Does the company clearly disclose information about the processes it uses to identify content or accounts that violate the company’s rules?
    4. Does the company clearly disclose how it uses algorithmic systems to flag content that might violate the company’s rules?
    5. Does the company clearly disclose whether any government authorities receive priority consideration when flagging content to be restricted for violating the company’s rules?
    6. Does the company clearly disclose whether any private entities receive priority consideration when flagging content to be restricted for violating the company’s rules?
    7. Does the company clearly disclose its process for enforcing its rules once violations are detected?

F3(b). Advertising content rules and enforcement

The company should clearly disclose its policies governing what types of advertising content are prohibited.

Elements:

    1. Does the company clearly disclose what types of advertising content it does not permit?
    2. Does the company clearly disclose whether it requires all advertising content be clearly labelled as such?
    3. Does the company clearly disclose the processes and technologies it uses to identify advertising content or accounts that violate the company’s rules?

F3(c). Advertising targeting rules and enforcement

The company should clearly disclose its policies governing what type of advertising targeting is prohibited.

Elements:

    1. Does the company clearly disclose whether it enables third parties to target its users with advertising content?
    2. Does the company clearly disclose what types of targeting parameters are not permitted?
    3. Does the company clearly disclose that it does not permit advertisers to target specific individuals?
    4. Does the company clearly disclose that algorithmically generated advertising audience categories are evaluated by human reviewers before they can be used?
    5. Does the company clearly disclose information about the processes and technologies it uses to identify advertising content or accounts that violate the company’s rules?

F4: Data about policy enforcement

F4(a). Data about content restrictions to enforce terms of service

The company should clearly disclose and regularly publish data about the volume and nature of actions taken to restrict content that violates the company’s rules.

Elements:

    1. Does the company publish data about the total number of pieces of content restricted for violating the company’s rules?
    2. Does the company publish data on the number of pieces of content restricted based on which rule was violated?
    3. Does the company publish data on the number of pieces of content it restricted based on the format of content (e.g., text, image, video, live video)?
    4. Does the company publish data on the number of pieces of content it restricted based on the method used to identify the violation?
    5. Does the company publish this data at least four times a year?
    6. Can the data be exported as a structured data file?

F4(b). Data about account restrictions to enforce terms of service

The company should clearly disclose and regularly publish data about the volume and nature of actions taken to restrict accounts that violate the company’s rules.

Elements:

    1. Does the company publish data on the total number of accounts restricted for violating the company’s own rules?
    2. Does the company publish data on the number of accounts restricted based on which rule was violated?
    3. Does the company publish data on the number of accounts restricted based on the method used to identify the violation?
    4. Does the company publish this data at least four times a year?
    5. Can the data be exported as a structured data file?

F4(c). Data about advertising content and advertising targeting policy enforcement

The company should clearly disclose and regularly publish data about the volume and nature of actions taken to restrict advertising content that violates the company’s advertising content policies and advertising targeting policies.

Elements:

    1. Does the company publish the number of advertisements it restricted to enforce its advertising content policies?
    2. Does the company publish the number of advertisements it restricted based on which advertising content rule was violated?
    3. Does the company publish the total number of advertisements it restricted to enforce its advertising targeting policies?
    4. Does the company publish the number of advertisements it restricted based on which advertising targeting rule was violated?
    5. Does the company publish this data at least once a year?
    6. Can the data be exported as a structured data file?

F5: Process for responding to third-party requests to restrict content or accounts

F5(a). Process for responding to government demands to restrict content or accounts

The company should clearly disclose its process for responding to government demands (including judicial orders) to remove, filter, or restrict content or accounts.

Elements:

    1. Does the company clearly disclose its process for responding to non-judicial government demands?
    2. Does the company clearly disclose its process for responding to court orders?
    3. Does the company clearly disclose its process for responding to government demands from foreign jurisdictions?
    4. Do the company’s explanations clearly disclose the legal basis under which it may comply with government demands?
    5. Does the company clearly disclose that it carries out due diligence on government demands before deciding how to respond?
    6. Does the company commit to push back on inappropriate or overbroad demands made by governments?
    7. Does the company provide clear guidance or examples of implementation of its process of responding to government demands?

F5(b). Process for responding to private requests for content or account restriction

The company should clearly disclose its process for responding to requests to remove, filter, or restrict content or accounts that come through private processes.

Elements:

    1. Does the company clearly disclose its process for responding to requests to remove, filter, or restrict content or accounts made through private processes?
    2. Do the company’s explanations clearly disclose the basis under which it may comply with requests made through private processes?
    3. Does the company clearly disclose that it carries out due diligence on requests made through private processes before deciding how to respond?
    4. Does the company commit to push back on inappropriate or overbroad requests made through private processes?
    5. Does the company provide clear guidance or examples of implementation of its process of responding to requests made through private processes?

F6. Data about government demands to restrict content and accounts

The company should regularly publish data about government demands (including judicial orders) to remove, filter, or restrict content and accounts.

Elements:

      1. Does the company break out the number of government demands it receives by country?
      2. Does the company list the number of accounts affected?
      3. Does the company list the number of pieces of content or URLs affected?
      4. Does the company list the types of subject matter associated with the government demands it receives?
      5. Does the company list the number of government demands that come from different legal authorities?
      6. Does the company list the number of government demands it knowingly receives from government officials to restrict content or accounts through unofficial processes?
      7. Does the company list the number of government demands with which it complied?
      8. Does the company publish the original government demands or disclose that it provides copies to a public third-party archive?
      9. Does the company report this data at least once a year?
      10. Can the data be exported as a structured data file?

F7. Data about private requests for content or account restriction

The company should regularly publish data about requests to remove, filter, or restrict access to content or accounts that come through private processes.

Elements:

    1. Does the company break out the number of requests to restrict content or accounts that it receives through private processes?
    2. Does the company list the number of accounts affected?
    3. Does the company list the number of pieces of content or URLs affected?
    4. Does the company list the reasons for removal associated with the requests it receives?
    5. Does the company clearly disclose the private processes that made requests?
    6. Does the company list the number of requests it complied with?
    7. Does the company publish the original requests or disclose that it provides copies to a public third-party archive?
    8. Does the company report this data at least once a year?
    9. Can the data be exported as a structured data file?
    10. Does the company clearly disclose that its reporting covers all types of requests that it receives through private processes?

F8. User notification about content and account restriction

The company should clearly disclose that it notifies users when it restricts content or accounts.

Elements:

    1. If the company hosts user-generated content, does the company clearly disclose that it notifies users who generated the content when it is restricted?
    2. Does the company clearly disclose that it notifies users who attempt to access content that has been restricted?
    3. In its notification, does the company clearly disclose a reason for the content restriction (legal or otherwise)?
    4. Does the company clearly disclose that it notifies users when it restricts their accounts?

F9. Network management (telecommunications companies)

The company should clearly disclose that it does not prioritize, block, or delay certain types of traffic, applications, protocols, or content for any reason beyond assuring quality of service and reliability of the network.

Elements:

    1. Does the company clearly disclose a policy commitment to not prioritize, block, or delay certain types of traffic, applications, protocols, or content for reasons beyond assuring quality of service and reliability of the network?
    2. Does the company engage in practices, such as offering zero-rating programs, that prioritize network traffic for reasons beyond assuring quality of service and reliability of the network?
    3. If the company does engage in network prioritization practices for reasons beyond assuring quality of service and reliability of the network, does it clearly disclose its purpose for doing so?

F10. Network shutdown (telecommunications companies)

The company should clearly disclose the circumstances under which it may shut down or restrict access to the network or to specific protocols, services, or applications on the network.

Elements:

    1. Does the company clearly disclose the reason(s) why it may shut down service to a particular area or group of users?
    2. Does the company clearly disclose why it may restrict access to specific applications or protocols (e.g., VoIP, messaging) in a particular area or to a specific group of users?
    3. Does the company clearly disclose its process for responding to government demands to shut down a network or restrict access to a service?
    4. Does the company clearly disclose a commitment to push back on government demands to shut down a network or restrict access to a service?
    5. Does the company clearly disclose that it notifies users directly when it shuts down a network or restricts access to a service?
    6. Does the company clearly disclose the number of network shutdown demands it receives?
    7. Does the company clearly disclose the specific legal authority that makes the demands?
    8. Does the company clearly disclose the number of government demands with which it complied?

F11. Identity policy

The company should not require users to verify their identity with their government-issued identification, or other forms of identification that could be connected to their offline identity.

Elements:

    1. Does the company require users to verify their identity with their government-issued identification, or with other forms of identification that could be connected to their offline identity?

F12. Algorithmic content curation, recommendation, and/or ranking systems

Companies should clearly disclose how users’ online content is curated, ranked, or recommended.

Elements:

    1. Does the company clearly disclose whether it uses algorithmic systems to curate, recommend, and/or rank the content that users can access through its platform?
    2. Does the company clearly disclose how the algorithmic systems are deployed to curate, recommend, and/or rank content, including the variables that influence these systems?
    3. Does the company clearly disclose what options users have to control the variables that the algorithmic content curation, recommendation, and/or ranking system takes into account?
    4. Does the company clearly disclose whether algorithmic systems are used to automatically curate, recommend, and/or rank content by default?
    5. Does the company clearly disclose that users can opt in to automated content curation, recommendation, and/or ranking systems?

F13. Automated software agents (“bots”)

Companies should clearly disclose policies governing the use of automated software agents (“bots”) on their platforms, products and services, and how they enforce such policies.

Elements:

    1. Does the company clearly disclose rules governing the use of bots on its platform?
    2. Does the company clearly disclose that it requires users to clearly label all content and accounts that are produced, disseminated or operated with the assistance of a bot?
    3. Does the company clearly disclose its process for enforcing its bot policy?
    4. Does the company clearly disclose data on the volume and nature of user content and accounts restricted for violating the company’s bot policy?

P: Privacy

Indicators in this category seek evidence that in its disclosed policies and practices, the company demonstrates concrete ways in which it respects the right to privacy of users, as articulated in the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights, and other international human rights instruments. The company’s disclosed policies and practices demonstrate how it works to avoid contributing to actions that may interfere with users’ privacy, except where such actions are lawful, proportionate, and for a justifiable purpose. They will also demonstrate a strong commitment to protect and defend users’ digital security. Companies that perform well on these indicators demonstrate a strong public commitment to transparency not only in terms of how they respond to government and others’ demands, but also how they determine, communicate, and enforce private rules and commercial practices that affect users’ privacy.

P1: Access to policies affecting users’ privacy

P1(a). Access to privacy policies

The company should offer privacy policies that are easy to find and easy to understand.

Elements:

    1. Are the company’s privacy policies easy to find?
    2. Are the privacy policies available in the primary language(s) spoken by users in the company’s home jurisdiction?
    3. Are the policies presented in an understandable manner?
    4. (For mobile ecosystems): Does the company disclose that it requires apps made available through its app store to provide users with a privacy policy?
    5. (For personal digital assistant ecosystems): Does the company disclose that it requires skills made available through its skill store to provide users with a privacy policy?

P1(b). Access to algorithmic system development policies

The company should offer algorithmic system development policies that are easy to find and easy to understand.

Elements:

    1. Are the company’s algorithmic system development policies easy to find?
    2. Are the algorithmic system development policies available in the primary language(s) spoken by users?
    3. Are the algorithmic system development policies presented in an understandable manner?

P2: Notification of changes

P2(a). Changes to privacy policies

The company should clearly disclose that it directly notifies users when it changes its privacy policies, prior to these changes coming into effect.

Elements:

    1. Does the company clearly disclose that it directly notifies users about all changes to its privacy policies?
    2. Does the company clearly disclose how it will directly notify users of changes?
    3. Does the company clearly disclose the timeframe within which it directly notifies users of changes prior to these changes coming into effect?
    4. Does the company maintain a public archive or change log?
    5. (For mobile ecosystems): Does the company clearly disclose that it requires apps sold through its app store to notify users when the app changes its privacy policy?
    6. (For personal digital assistant ecosystems): Does the company clearly disclose that it requires skills sold through its skill store to notify users when the skill changes its privacy policy?

P2(b). Changes to algorithmic system development policies

The company should clearly disclose that it directly notifies users when it changes its algorithmic system development policies, prior to these changes coming into effect.

Elements:

    1. Does the company clearly disclose that it directly notifies users about all changes to its algorithmic system development policies?
    2. Does the company clearly disclose how it will directly notify users of changes?
    3. Does the company clearly disclose the time frame within which it directly notifies users of changes prior to these changes coming into effect?
    4. Does the company maintain a public archive or change log?

P3: User information collection and inference

P3(a). Collection of user information

The company should clearly disclose what user information it collects and how.

Elements:

    1. Does the company clearly disclose what types of user information it collects?
    2. For each type of user information the company collects, does the company clearly disclose how it collects that user information?
    3. Does the company clearly disclose that it limits collection of user information to what is directly relevant and necessary to accomplish the purpose of its service?
    4. (For mobile ecosystems): Does the company clearly disclose that it evaluates whether the privacy policies of third-party apps made available through its app store disclose what user information the apps collect?
    5. (For mobile ecosystems): Does the company clearly disclose that it evaluates whether third-party apps made available through its app store limit collection of user information to what is directly relevant and necessary to accomplish the purpose of the app?
    6. (For personal digital assistant ecosystems): Does the company clearly disclose that it evaluates whether the privacy policies of third-party skills made available through its skill store disclose what user information the skills collect?
    7. (For personal digital assistant ecosystems): Does the company clearly disclose that it evaluates whether third-party skills made available through its skill store limit collection of user information to what is directly relevant and necessary to accomplish the purpose of the skill?

P3(b). Inference of user information

The company should clearly disclose what user information it infers and how.

Elements:

    1. Does the company clearly disclose all the types of user information it infers on the basis of collected user information?
    2. For each type of user information the company infers, does the company clearly disclose how it infers that user information?
    3. Does the company clearly disclose that it limits inference of user information to what is directly relevant and necessary to accomplish the purpose of its service?

P4. Sharing of user information

The company should clearly disclose what user information it shares and with whom.

Elements:

    1. For each type of user information the company collects, does the company clearly disclose whether it shares that user information?
    2. For each type of user information the company shares, does the company clearly disclose the types of third parties with which it shares that user information?
    3. Does the company clearly disclose that it may share user information with government(s) or legal authorities?
    4. For each type of user information the company shares, does the company clearly disclose the names of all third parties with which it shares user information?
    5. (For mobile ecosystems): Does the company clearly disclose that it evaluates whether the privacy policies of third-party apps made available through its app store disclose what user information the apps share?
    6. (For mobile ecosystems): Does the company clearly disclose that it evaluates whether the privacy policies of third-party apps made available through its app store disclose the types of third parties with whom they share user information?
    7. (For personal digital assistant ecosystems): Does the company clearly disclose that it evaluates whether the privacy policies of third-party skills made available through its skill store disclose what user information the skills share?
    8. (For personal digital assistant ecosystems): Does the company clearly disclose that it evaluates whether the privacy policies of third-party skills made available through its skill store disclose the types of third parties with whom they share user information?

P5. Purpose for collecting, inferring, and sharing user information

The company should clearly disclose why it collects, infers, and shares user information.

Elements:

    1. For each type of user information the company collects, does the company clearly disclose its purpose for collection?
    2. For each type of user information the company infers, does the company clearly disclose its purpose for the inference?
    3. Does the company clearly disclose whether it combines user information from various company services and if so, why?
    4. For each type of user information the company shares, does the company clearly disclose its purpose for sharing?
    5. Does the company clearly disclose that it limits its use of user information to the purpose for which it was collected or inferred?

P6. Retention of user information

The company should clearly disclose how long it retains user information.

Elements:

    1. For each type of user information the company collects, does the company clearly disclose how long it retains that user information?
    2. Does the company clearly disclose what de-identified user information it retains?
    3. Does the company clearly disclose the process for de-identifying user information?
    4. Does the company clearly disclose that it deletes all user information after users terminate their account?
    5. Does the company clearly disclose the time frame in which it will delete user information after users terminate their account?
    6. (For mobile ecosystems): Does the company clearly disclose that it evaluates whether the privacy policies of third-party apps made available through its app store disclose how long they retain user information?
    7. (For mobile ecosystems): Does the company clearly disclose that it evaluates whether the privacy policies of third-party apps made available through its app store state that all user information is deleted when users terminate their accounts or delete the app?
    8. (For personal digital assistant ecosystems): Does the company clearly disclose that it evaluates whether the privacy policies of third-party skills made available through its skill store disclose how long they retain user information?
    9. (For personal digital assistant ecosystems): Does the company clearly disclose that it evaluates whether the privacy policies of third-party skills made available through its skill store state that all user information is deleted when users terminate their accounts or delete the skill?

P7. Users’ control over their own user information

The company should clearly disclose to users what options they have to control the company’s collection, inference, retention and use of their user information.

Elements:

    1. For each type of user information the company collects, does the company clearly disclose whether users can control the company’s collection of this user information?
    2. For each type of user information the company collects, does the company clearly disclose whether users can delete this user information?
    3. For each type of user information the company infers on the basis of collected information, does the company clearly disclose whether users can control if the company can attempt to infer this user information?
    4. For each type of user information the company infers on the basis of collected information, does the company clearly disclose whether users can delete this user information?
    5. Does the company clearly disclose that it provides users with options to control how their user information is used for targeted advertising?
    6. Does the company clearly disclose that targeted advertising is off by default?
    7. Does the company clearly disclose that it provides users with options to control how their user information is used for the development of algorithmic systems?
    8. Does the company clearly disclose whether it uses user information to develop algorithmic systems by default, or not?
    9. (For mobile ecosystems and personal digital assistant ecosystems): Does the company clearly disclose that it provides users with options to control the device’s geolocation functions?

P8. Users’ access to their own user information

Companies should allow users to obtain all of their user information the company holds.

Elements:

    1. Does the company clearly disclose that users can obtain a copy of their user information?
    2. Does the company clearly disclose what user information users can obtain?
    3. Does the company clearly disclose that users can obtain their user information in a structured data format?
    4. Does the company clearly disclose that users can obtain all public-facing and private user information a company holds about them?
    5. Does the company clearly disclose that users can access the list of advertising audience categories to which the company has assigned them?
    6. Does the company clearly disclose that users can obtain all the information that a company has inferred about them?
    7. (For mobile ecosystems): Does the company clearly disclose that it evaluates whether the privacy policies of third-party apps made available through its app store disclose that users can obtain all of the user information about them the app holds?
    8. (For personal digital assistant ecosystems): Does the company clearly disclose that it evaluates whether the privacy policies of third-party skills made available through its skill store disclose that users can obtain all of the user information about them the skill holds?

P9. Collection of user information from third parties

The company should clearly disclose its practices with regard to user information it collects from third-party websites or apps through technical means, as well as user information it collects through non-technical means.

Elements:

    1. (For digital platforms) Does the company clearly disclose what user information it collects from third-party websites through technical means?
    2. (For digital platforms) Does the company clearly explain how it collects user information from third parties through technical means?
    3. (For digital platforms) Does the company clearly disclose its purpose for collecting user information from third parties through technical means?
    4. (For digital platforms) Does the company clearly disclose how long it retains the user information it collects from third parties through technical means?
    5. (For digital platforms) Does the company clearly disclose that it respects user-generated signals to opt out of data collection?
    6. Does the company clearly disclose what user information it collects from third parties through non-technical means?
    7. Does the company clearly disclose how it collects user information from third parties through non-technical means?
    8. Does the company clearly disclose its purpose for collecting user information from third parties through non-technical means?
    9. Does the company clearly disclose how long it retains the user information it collects from third parties through non-technical means?

P10. Process for responding to demands for user information

P10(a). Process for responding to government demands for user information

The company should clearly disclose its process for responding to government demands for user information.

Elements:

    1. Does the company clearly disclose its process for responding to non-judicial government demands?
    2. Does the company clearly disclose its process for responding to court orders?
    3. Does the company clearly disclose its process for responding to government demands from foreign jurisdictions?
    4. Do the company’s explanations clearly disclose the legal basis under which it may comply with government demands?
    5. Does the company clearly disclose that it carries out due diligence on government demands before deciding how to respond?
    6. Does the company commit to push back on inappropriate or overbroad government demands?
    7. Does the company provide clear guidance or examples of implementation of its process for government demands?

P10(b). Process for responding to private requests for user information

The company should clearly disclose its process for responding to requests for user information that come through private processes.

Elements:

    1. Does the company clearly disclose its process for responding to requests made through private processes?
    2. Do the company’s explanations clearly disclose the basis under which it may comply with requests made through private processes?
    3. Does the company clearly disclose that it carries out due diligence on requests made through private processes before deciding how to respond?
    4. Does the company commit to push back on inappropriate or overbroad requests made through private processes?
    5. Does the company provide clear guidance or examples of implementation of its process of responding to requests made through private processes?

P11. Data about demands for user information

P11(a). Data about government demands for user information

The company should regularly publish data about government demands for user information.

Elements:

    1. Does the company list the number of government demands it receives by country?
    2. Does the company list the number of government demands it receives for stored user information and for real-time communications access?
    3. Does the company list the number of accounts affected?
    4. Does the company list whether a demand sought communications content or non-content or both?
    5. Does the company identify the specific legal authority or type of legal process through which law enforcement and national security demands are made?
    6. Does the company include government demands that come from court orders?
    7. Does the company list the number of government demands it complied with, broken down by category of demand?
    8. Does the company list what types of government demands it is prohibited by law from disclosing?
    9. Does the company report this data at least once per year?
    10. Can the data reported by the company be exported as a structured data file?

P11(b). Data about private requests for user information

The company should regularly publish data about requests for user information that come through private processes.

Elements:

    1. Does the company list the number of requests it receives for user information that come through private processes?
    2. Does the company list the number of requests for user information that come through private processes with which it complied?
    3. Does the company report this data at least once per year?
    4. Can the data reported by the company be exported as a structured data file?

P12. User notification about third-party requests for user information

The company should notify users to the extent legally possible when their user information has been demanded by governments and other third parties.

Elements:

    1. Does the company clearly disclose that it notifies users when government entities (including courts or other judicial bodies) demand their user information?
    2. Does the company clearly disclose that it notifies users when it receives requests for their user information through private processes?
    3. Does the company clearly disclose situations when it might not notify users, including a description of the types of government demands it is prohibited by law from disclosing to users?

P13. Security oversight

The company should clearly disclose information about its institutional processes to ensure the security of its products and services.

Elements:

    1. Does the company clearly disclose that it has systems in place to limit and monitor employee access to user information?
    2. Does the company clearly disclose that it has a security team that conducts security audits on the company’s products and services?
    3. Does the company clearly disclose that it commissions third-party security audits on its products and services?

P14. Addressing security vulnerabilities

The company should address security vulnerabilities when they are discovered.

Elements:

    1. Does the company clearly disclose that it has a mechanism through which security researchers can submit vulnerabilities they discover?
    2. Does the company clearly disclose the timeframe in which it will review reports of vulnerabilities?
    3. Does the company commit not to pursue legal action against researchers who report vulnerabilities within the terms of the company’s reporting mechanism?
    4. (For mobile ecosystems and personal digital assistant ecosystems) Does the company clearly disclose that software updates, security patches, add-ons, or extensions are downloaded over an encrypted channel?
    5. (For mobile ecosystems and telecommunications companies) Does the company clearly disclose what, if any, modifications it has made to a mobile operating system?
    6. (For mobile ecosystems, personal digital assistant ecosystems, and telecommunications companies) Does the company clearly disclose what, if any, effect such modifications have on the company’s ability to send security updates to users?
    7. (For mobile ecosystems and personal digital assistant ecosystems) Does the company clearly disclose the date through which it will continue to provide security updates for the device/OS?
    8. (For mobile ecosystems and personal digital assistant ecosystems) Does the company commit to provide security updates for the operating system and other critical software for a minimum of five years after release?
    9. (For mobile ecosystems, personal digital assistant ecosystems, and telecommunications companies) If the company uses an operating system adapted from an existing system, does the company commit to provide security patches within one month of a vulnerability being announced to the public?
    10. (For personal digital assistant ecosystems): Does the company clearly disclose what, if any, modifications it has made to a personal digital assistant operating system?
    11. (For personal digital assistant ecosystems): Does the company clearly disclose what, if any, effect such modifications have on the company’s ability to send security updates to users?

P15. Data breaches

The company should publicly disclose information about its processes for responding to data breaches.

Elements:

    1. Does the company clearly disclose that it will notify the relevant authorities without undue delay when a data breach occurs?
    2. Does the company clearly disclose its process for notifying data subjects who might be affected by a data breach?
    3. Does the company clearly disclose what kinds of steps it will take to address the impact of a data breach on its users?

P16. Encryption of user communication and private content (digital platforms)

The company should encrypt user communication and private content so users can control who has access to it.

Elements:

    1. Does the company clearly disclose that the transmission of user communications is encrypted by default?
    2. Does the company clearly disclose that transmissions of user communications are encrypted using unique keys?
    3. Does the company clearly disclose that users can secure their private content using end-to-end encryption, or full-disk encryption (where applicable)?
    4. Does the company clearly disclose that end-to-end encryption, or full-disk encryption, is enabled by default?

P17. Account security (digital platforms)

The company should help users keep their accounts secure.

Elements:

    1. Does the company clearly disclose that it deploys advanced authentication methods to prevent fraudulent access?
    2. Does the company clearly disclose that users can view their recent account activity?
    3. Does the company clearly disclose that it notifies users about unusual account activity and possible unauthorized access to their accounts?

P18. Inform and educate users about potential risks

The company should publish information to help users defend themselves against cybersecurity risks.

Elements:

    1. Does the company publish practical materials that educate users on how to protect themselves from cybersecurity risks relevant to their products or services?

Glossary

Note: This is not a general glossary. The definitions and explanations provided below were written specifically to guide researchers in evaluating ICT companies on this project’s research indicators.

Account / user account — A collection of data associated with a particular user of a given computer system, service, or platform. At a minimum, the user account comprises a username and password, which are used to authenticate the user’s access to his/her data.

Account restriction / restrict a user’s account — Limitation, suspension, deactivation, deletion, or removal of a specific user account or permissions on a user’s account.

Advertisement — A message that an advertiser has paid a company to display to a subset of its users, consisting of both advertising content and targeting parameters.

Advertiser — A person or entity that has created and/or paid for advertising content. The advertiser typically determines the targeting parameters for each advertisement.

Advertising audience categories — Groups of users, identified for the purpose of delivering targeted advertising, who share certain characteristics and/or interests, as determined on the basis of user information that a company has either collected or inferred.

Advertising content policies — Documents that outline a company’s rules governing what advertising content is permitted on the platform.

Advertising content — Any content that someone has paid a company to display to its users.

Advertising network — A company or service that connects advertisers to websites that want to host advertisements. The key function of an ad network is aggregation of ad space supply from publishers and matching it with advertiser demand.

Advertising targeting policies — Documents that outline a company’s rules governing what advertising targeting parameters are permitted on the platform.

Advertising technologies — Algorithmic decision-making systems that determine which users will be shown a specific piece of advertising content. This determination may take into account the targeting parameters set by the advertiser, or it may be fully automated.

Affected user — The user who posted content that was restricted by a moderation action or the user associated with a user account that was restricted by a moderation action, and, if applicable, the user(s) who submitted the flag that led to the consideration of a piece of content or an account for a moderation action.

Algorithms — An algorithm is a set of instructions used to process information and deliver an output based on the instructions’ stipulations. Algorithms can be simple pieces of code but they can also be incredibly complex, “encoding for thousands of variables across millions of data points.” In the context of internet, mobile, and telecommunications companies, some algorithms—because of their complexity, the amounts and types of user information fed into them, and the decision-making function they serve—have significant implications for users’ human rights, including freedom of expression and privacy. See more at: “Algorithmic Accountability: A Primer,” Data & Society: https://datasociety.net/wp-content/uploads/2018/04/Data_Society_Algorithmic_Accountability_Primer_FINAL-4.pdf

Algorithmic content curation, recommendation, and/or ranking system — A system that uses algorithms, machine learning and other automated decision-making technologies to manage, shape, and govern the flow of content and information on a platform, typically in a way that is personalized to each individual user.
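
For illustration only, the toy Python sketch below ranks posts for a single user by combining a topical-interest match with recency. All field names and weights are invented for this example; real curation systems draw on far more signals and on machine-learned models.

    # Toy illustration of algorithmic ranking; not any company's actual system.
    def rank_feed(posts, user_interests):
        def score(post):
            topical_match = len(set(post["topics"]) & set(user_interests))  # shared interests
            recency = 1.0 / (1.0 + post["age_hours"])                       # newer scores higher
            return 2.0 * topical_match + recency
        return sorted(posts, key=score, reverse=True)

    posts = [
        {"id": 1, "topics": ["privacy", "policy"], "age_hours": 5},
        {"id": 2, "topics": ["sports"], "age_hours": 1},
    ]
    print(rank_feed(posts, user_interests=["privacy"]))  # post 1 is ranked first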

Algorithmic system development policies — Documents that outline a company’s practices related to the development and testing of algorithms, machine learning and automated decision-making.

Algorithmic system use policies — Documents that outline a company’s practices involving the use of algorithms, machine learning and automated decision-making.

Algorithmic system — A system that uses algorithms, machine learning and/or related technologies to automate, optimize and/or personalize decision-making processes.

Anonymous data — Data that is in no way connected to another piece of information that could enable a user to be identified. The expansive nature of this definition used by the Ranking Digital Rights project is necessary to reflect several facts. First, skilled analysts can de-anonymize large data sets. This renders nearly all promises of anonymization unattainable. In essence, any data tied to an “anonymous identifier” is not anonymous; rather, this is often pseudonymous data which may be tied back to the user’s offline identity. Second, metadata may be as or more revealing of a user’s associations and interests than content data, thus this data is of vital interest. Third, entities that have access to many sources of data, such as data brokers and governments, may be able to pair two or more data sources to reveal information about users. Thus, sophisticated actors can use data that seems anonymous to construct a larger picture of a user.

Automated flag — A flag that originates with an algorithmic system. See also: human-submitted flag.

App — A self-contained program or piece of software designed to fulfill a particular purpose; a software application, especially as downloaded by a user to a mobile device.

App store — The platform through which a company makes its own apps as well as those created by third-party developers available for download. An app store (or app marketplace) is a type of digital distribution platform for computer software, often in a mobile context.

Appeal — For RDR’s purposes, this definition of appeals includes processes through which users request a formal change to a content moderation or account restriction decision made by a company.

Artificial intelligence — Artificial intelligence has an array of uses and meanings. For the purposes of RDR’s methodology, artificial intelligence refers to systems that resemble, carry out, or mimic functions that are typically thought of as requiring intelligence. Examples include facial recognition software, natural language processing, and others, the use of which by internet, mobile, and telecommunications companies have implications for people’s freedom of expression and privacy rights. See: “Privacy and Freedom of Expression in the Age of Artificial Intelligence,” https://privacyinternational.org/sites/default/files/2018-04/Privacy%20and%20Freedom%20of%20Expression%20%20In%20the%20Age%20of%20Artificial%20Intelligence.pdf

Automated decision-making — Technology that makes decisions without significant human oversight or input in the decision-making process, such as through the use of artificial intelligence or algorithms.

Board of directors — Board-level oversight should involve members of the board having direct oversight of issues related to freedom of expression and privacy. This does not have to be a formal committee, but the responsibility of board members in overseeing company practices on these issues should be clearly articulated and disclosed on the company’s website.

Bot — An automated online account where all or substantially all of the actions or posts of that account are not the result of a person.

Botnet — A coordinated network of bots that act in concert, usually because they are under the control of the same person or entity.

Bot policy — A document that outlines a company’s rules governing the use of bots to generate content, disseminate content, or perform other actions. May be part of the company’s terms of service or other document.

Collected user information — User information that a company either observes directly or acquires from a third party.

Curate, recommend, and/or rank — The practice of using algorithms, machine learning and other automated decision-making systems to manage, shape, and govern the flow of content and information on a platform, typically in a way that is personalized to each individual user.

Change log — A record that depicts the specific changes in a document, in this case, a terms of service or privacy policy document.

Clearly disclose(s) — The company presents or explains its policies or practices in its public-facing materials in a way that is easy for users to find and understand.

Collect / Collection — All means by which a company may gather information about users. For example, a company may collect this information directly in a range of situations, including when users upload content for public sharing, submit phone numbers for account verification, transmit personal information in private conversation with one another, etc. A company may also collect this information indirectly, for example, by recording log data, account information, metadata, and other related information that describes users and/or documents their activities.

Cookie(s) — “Cookies are a web technology that let websites recognize your browser. Cookies were originally designed to allow sites to offer online shopping carts, save preferences or keep you logged on to a site. They also enable tracking and profiling so sites can recognize you and learn more about where you go, which devices you use, and what you are interested in – even if you don’t have an account with that site, or aren’t logged in.” Source: https://ssd.eff.org/en/glossary/cookies.
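
As a minimal sketch (the URL is a placeholder, and which cookies are set depends on the site), the Python snippet below shows how a client that persists cookies, much as a browser does, is recognized on later requests:

    import requests

    # A Session stores cookies between requests, much as a browser does.
    session = requests.Session()

    # First visit: if the server responds with a Set-Cookie header,
    # the session stores it automatically.
    session.get("https://example.com/")
    print(session.cookies.get_dict())  # any cookies the site chose to set

    # Later visit: stored cookies are sent back automatically, which is
    # what allows the site to recognize the same client and build a profile.
    session.get("https://example.com/account")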

Content — The information contained in wire, oral, or electronic communications (e.g., a conversation that takes place over the phone or face-to-face, the text written and transmitted in an SMS or email).

Content restriction — An action the company takes that renders an instance of user-generated content invisible or less visible on the platform or service. This action could involve removing the content entirely or take a less absolute form, such as hiding it from only certain users (e.g., inhabitants of some country or people under a certain age), limiting users’ ability to interact with it (e.g., making it impossible to “like”), adding counterspeech to it (e.g., corrective information on anti-vaccine posts), or reducing the amount of amplification provided by the platform’s curation systems.

Content-moderation action — Content moderation is the practice of screening user-generated content posted to internet sites, social media, and other online outlets in order to determine the appropriateness of the content for a given site, locality, or jurisdiction. The process can result in the content being removed or restricted by a moderator acting as an agent of the platform or site in question. Increasingly, companies rely on algorithmic systems, in addition to human moderators, to moderate content and information on their platforms. Source: https://doi.org/10.1007/978-3-319-32001-4_44-1

Core functionality — The most essential functions or affordances of a product or service. For example, a smartphone’s core functionality would include making and receiving phone calls, text messages and emails, downloading and running apps, and accessing the internet.

Court orders — Orders issued by a court, including in both criminal and civil cases.

Critical (software) update — A widely released fix for a product-specific, security-related vulnerability. Security vulnerabilities are rated by their severity: critical, important, moderate, or low.

Cybersecurity risks — Situations in which a user’s security, privacy, or other related rights might be threatened by a malicious actor (including but not limited to criminals, insiders, or nation states) who may gain unauthorized access to user data using hacking, phishing, or other deceptive techniques.

Data breach — A data breach occurs when an unauthorized party gains access to user information that a company collects, retains, or otherwise processes, and which compromises the integrity, security, or confidentiality of that information.

Data inference — Companies are able to draw inferences and predictions about the behaviors, preferences, and private lives of their users by applying “big data” analytics and algorithmic decision-making technologies. These methods might be used to make inferences about user preferences or attributes (e.g., race, gender, sexual orientation) and opinions (e.g., political stances), or to predict behaviors (e.g., to serve advertisements). Without sufficient transparency and user control over data inference, privacy-invasive and non-verifiable inferences cannot be predicted, understood, or refuted by users. For more see: Wachter, Sandra and Mittelstadt, Brent, “A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI” (October 5, 2018), Columbia Business Law Review, 2019(2).

Data minimization — According to the principle of data minimization, companies should limit the collection of users’ information to that which is relevant and necessary to accomplish a clearly specified purpose. See also: use limitation (below).

De-identified (user information) — This refers to user information that companies collect and retain but only after removing or obscuring any identifiable information from it. This means removing explicit identifiers like names, email addresses, and any government-issued ID numbers, as well as identifiers like IP addresses, cookies, and unique device numbers.

Developer/third-party developer — An individual (or group of individuals) who creates a software program or application that is distributed through a company’s app store.

Device/handheld device/mobile device — A physical object, such as a smartphone or feature phone, used to access telecommunication networks that is designed to be carried by the user and used in a variety of locations.

Digital platforms — For the purposes of the RDR Index methodology, digital platforms refers to a category of the RDR Index that includes internet and mobile ecosystem companies as well as companies that operate e-commerce services and personal digital assistant ecosystems.

Discrimination — For the purpose of the RDR Index, discrimination refers to the practice of treating particular people, companies, or products differently from others, especially in an unfair way. Source: Cambridge Business English dictionary, https://dictionary.cambridge.org/dictionary/english/discrimination.

Directly notify/direct notification — By direct notification, we mean that when a company changes or updates its policy that applies to a particular service, we expect the company to notify users of these changes via the service. The method of direct notification may differ according to the type of service. For services that contain user accounts, direct notification may involve sending an email or an SMS. For services that do not require a user account, direct notification may involve posting a prominent notice on the main page where users access the service.

Documentation — The company provides records that users can consult, such as a log of changes to terms of service or privacy policy documents.

Do Not Track — Also known by the acronym “DNT,” this refers to a setting in a user’s browser preferences that tells companies or third parties not to “track” them. In other words, every time a user loads a website, any parties that are involved in delivering the page (of which there are often many, primarily advertisers) are told not to collect or store any information about the user’s visit to the page. However, this is merely a polite request; a company may ignore a DNT request, and many do.
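
For illustration, Do Not Track is expressed as a simple HTTP request header. The snippet below (placeholder URL) sends it explicitly with Python’s requests library; as noted above, the receiving site is free to ignore it:

    import requests

    # "DNT: 1" signals that the user does not want to be tracked.
    # Honoring the header is entirely voluntary on the server's side.
    response = requests.get("https://example.com/", headers={"DNT": "1"})
    print(response.status_code)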

Easy to find — The terms of service or privacy policy is located one or two clicks away from the homepage of the company or service, or is located in a logical place where users are likely to find it.

Easy to understand / understandable manner — The company has taken steps to help users actually understand its terms of service and privacy policy. This includes, but is not limited to, providing summaries, tips, or guidance that explain what the terms mean, using section headers, readable font size, or other graphic features to help users understand the document, or writing the terms using readable syntax.

Encryption — This essentially hides the content of communications or files so only the intended recipient can view it. The process uses an algorithm to convert the message (plaintext) into a coded format (ciphertext) so that the message looks like a random series of characters to anyone who looks at it. Only someone who has the appropriate encryption key can decrypt the message, reversing the ciphertext back into plaintext. Data can be encrypted when it is stored and when it is in transmission.

For example, users can encrypt the data on their hard drive so that only the user with the encryption key can decipher the contents of the drive. Additionally, users can send an encrypted email message, which would prevent anyone from seeing the email contents while the message is moving through the network to reach the intended recipient. With encryption in transit (for example, when a website uses HTTPS), the communication between a user and a website is encrypted, so that outsiders, such as the user’s internet service provider, can only see the initial visit to the website, but not what the user communicates on that website, or the sub-pages that the user visits. For more information, see this resource: http://www.explainthatstuff.com/encryption.html
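
As a minimal sketch of symmetric encryption, the example below uses the third-party Python cryptography package (chosen here purely for illustration) to turn plaintext into ciphertext that is unreadable without the key:

    from cryptography.fernet import Fernet  # pip install cryptography

    key = Fernet.generate_key()        # whoever holds this key can decrypt
    f = Fernet(key)

    ciphertext = f.encrypt(b"private message")  # appears as a random string of characters
    plaintext = f.decrypt(ciphertext)           # reverses the ciphertext back into plaintext
    assert plaintext == b"private message"
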
End-to-end encryption — With end-to-end encryption, only the sender and receiver can read the content of the encrypted communications. Third parties, including the company, would not be able to decode the content.

Engage — Interactions between the company and stakeholders. Companies or stakeholders can initiate these interactions, and they can take various formats, including meetings, other communication, etc.

Engagement metrics — Numbers describing the popularity of a piece of content or account on the platform, for example followers, connections, contacts, friends, comments, likes, retweets, etc.

Executive-level oversight — The executive committee or a member of the company’s executive team directly oversees issues related to freedom of expression and privacy.

Explicit — The company specifically states its support for freedom of expression and privacy.

Flag — The process of alerting a company that a piece of content or account may be in violation of the company’s rules, or the signal that conveys this information to the company. This process can occur either within the platform or through an external process. Flaggers include users, algorithmic systems, company staff, governments, and other private entities.

Flagger — An individual or entity that alerts a company that a piece of content or account may be in violation of the company’s rules. This process can occur either within the platform or through an external process. Flaggers include users, algorithmic systems, company staff, governments, and other private entities.

Forward secrecy / “perfect forward secrecy” — An encryption method notably used in HTTPS web traffic and in messaging apps, in which a new key pair is generated for each session (HTTPS), or for each message exchanged between the parties (messaging apps). This way, if an adversary obtains one decryption key, it will not be able to decrypt past or future transmissions or messages in the conversation. Forward secrecy applies to data in transit; it is distinct from end-to-end encryption (defined above) and from encryption of data “at rest” on remote company servers. For more, see “Pushing for Perfect Forward Secrecy,” Electronic Frontier Foundation, https://www.eff.org/deeplinks/2013/08/pushing-perfect-forward-secrecy-important-web-privacy-protection.
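
A rough sketch of the underlying idea, again using the Python cryptography package for illustration: each session derives its key from freshly generated ephemeral key pairs, so compromising one session key reveals nothing about any other session. Real protocols such as TLS 1.3 or Signal add authentication and key ratcheting on top of this.

    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes

    def new_session_key():
        # Both parties generate fresh ephemeral key pairs for this session only.
        client_private = X25519PrivateKey.generate()
        server_private = X25519PrivateKey.generate()

        # Each side combines its own private key with the peer's public key;
        # both arrive at the same shared secret.
        shared = client_private.exchange(server_private.public_key())

        # Derive a short-lived session key from the shared secret.
        return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=b"illustrative session").derive(shared)

    # Different sessions yield unrelated keys.
    assert new_session_key() != new_session_key()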

Full-disk encryption — Comprehensive encryption of all data stored on a physical device, in such a way that only the user is able to access the content by providing the user-generated password(s) and/or other means of decryption (fingerprint, two-factor authentication code, physical token, etc.)

Geolocation — Identification of the real-world geographic location of an object, such as a radar source, mobile phone or internet-connected computer terminal. Geolocation may refer to the practice of assessing the location, or to the actual assessed location.

Government demands — This includes demands from government ministries or agencies, law enforcement, and court orders in criminal and civil cases.

Government-issued identification — An official document with or without a photo issued by the government that can be used to prove a person’s identity. This includes government ID or any form of documentation that identifies the person by physical location, family, or community. This also includes phone numbers, which are, in many jurisdictions, connected to a person’s offline identity.

Grievance — RDR takes its definition of grievance from the UN Guiding Principles: “[A] perceived injustice evoking an individual’s or a group’s sense of entitlement, which may be based on law, contract, explicit or implicit promises, customary practice, or general notions of fairness of aggrieved communities.” (p. 32 of 42.) Source: “Guiding Principles on Business and Human Rights: Implementing the United Nations ‘Protect, Respect and Remedy’ Framework,” 2011, http://www.ohchr.org/Documents/Publications/GuidingPrinciplesBusinessHR_EN.pdf.

Human Rights Impact Assessments (HRIA) — HRIAs are a systematic approach to due diligence. A company carries out these assessments or reviews to see how its products, services, and business practices affect the freedom of expression and privacy of its users.
For more information about Human Rights Impact Assessments and best practices in conducting them, see this special page hosted by the Business & Human Rights Resource Centre: https://business-humanrights.org/en/un-guiding-principles/implementation-tools-examples/implementation-by-companies/type-of-step-taken/human-rights-impact-assessments

The Danish Institute for Human Rights has developed a related Human Rights Compliance Assessment tool (https://hrca2.humanrightsbusiness.org), and BSR has developed a useful guide to conducting an HRIA: http://www.bsr.org/en/our-insights/bsr-insight-article/how-to-conduct-an-effective-human-rights-impact-assessment

For guidance specific to the ICT sector, see the excerpted book chapter (“Business, Human Rights and the Internet: A Framework for Implementation”) by Michael Samway on the project website at: http://rankingdigitalrights.org/resources/readings/samway_hria.

Human-submitted flag — A flag that originates with a human being, such as a user, company employee or contractor, government employee or representative, or a human employee or representative of a private entity. See also: automated flag.

Layered policy documents — Terms of service and privacy policies that are divided into hyperlinked sections, allowing users to directly navigate to the section they are interested in viewing.

Location data — Information collected by a network or service about where the user’s phone or other device is or was located—for example, tracing the location of a mobile phone from data collected by base stations on a mobile phone network or through GPS or Wi-Fi positioning.

Malware — An umbrella term used to refer to a variety of forms of hostile or intrusive software, including computer viruses, worms, trojan horses, ransomware, spyware, adware, scareware, and other malicious programs. It can take the form of executable code, scripts, active content, or other software.

Management-level oversight — A committee, program, team, or officer that is not part of the company’s board of directors or the executive team.

Mobile ecosystem — The indivisible set of goods and services offered by a mobile device company, comprising the device hardware, operating system, app store, and user account.

Modifications to a mobile operating system — Changes made to the stock version of a mobile OS that may affect core functionality, the user experience, or the process of deploying software updates. Core functionality refers to the most essential functions or affordances of a product or service; for example, a smartphone’s core functionality would include sending and receiving phone calls, text messages, and emails, downloading and running apps, and accessing the internet. This applies to Android smartphones produced by companies other than Google.

Multi-stakeholder initiative — A credible multi-stakeholder organization includes and is governed by members of at least three other stakeholder groups besides industry: civil society, investors, academics, at-large user or customer representatives, technical community, and/or government. Its funding model derives from more than one type of source (corporations, governments, foundations, public donations, etc.). Its independence, rigor, and professionalism are of a high standard, with strong participation by human rights organizations that themselves have solid track records of independence from corporate and/or government control. The Global Network Initiative is an example of a multi-stakeholder initiative focused on freedom of expression and privacy in the ICT sector.

Non-content — Data about an instance of communication or about a user. Companies may use different terms to refer to this data, including metadata, basic subscriber information, non-content transactional data, account data, or customer information.

In the U.S., the Stored Communications Act defines non-content customer communications or records as, “name; address; local and long distance telephone connection records, or records of session times and durations; length of service (including start date) and types of service utilized; telephone or instrument number or other subscriber number or identity (including any temporarily assigned network address); and means and source of payment for such service (including any credit card or bank account number).” The European Union’s Handbook on European Data Protection Law states, “Confidentiality of electronic communications pertains not only to the content of a communication but also to traffic data, such as information about who communicated with whom, when and for how long, and location data, such as from where data were communicated.”

Non-judicial government demands — These are requests that come from government entities that are not judicial bodies, judges, or courts. They can include requests from government ministries, agencies, police departments, police officers (acting in official capacity), and other non-judicial government offices, authorities, or entities.

Non-technical means — Companies can acquire user information through non-technical means, such as through purchases, data-sharing agreements, and other contractual relationships with third parties. This acquired data can become part of a “digital dossier” that companies may hold on their users, which can then form the basis for inferred and shared user information.

Notice / notify — The company communicates with users or informs users about something related to the company or service.

Officer — A senior employee accountable for an explicit set of risks and impacts, in this case privacy and freedom of expression.

Operating system (OS) — The software that supports a computer’s basic functions, such as scheduling tasks, executing applications, and controlling peripherals. A mobile operating system is the OS for a mobile device such as a smartphone or tablet.

Options to control — The company provides the user with a direct and easy-to-understand mechanism to opt-in or opt-out of data collection, use, or sharing. “Opt-in” means the company does not collect, use, or share data for a given purpose until users explicitly signal that they want this to happen. “Opt-out” means the company uses the data for a specified purpose by default, but will cease doing so once the user tells the company to stop. Note that this definition is potentially controversial as many privacy advocates believe only “opt-in” constitutes acceptable control. However, for the purposes of RDR, we have elected to count “opt-out” as a form of control.

Oversight / oversee — The company’s governance documents or decision-making processes assign a committee, program, team, or officer with formal supervisory authority over a particular function. This group or person has responsibility for the function and is evaluated based on the degree to which it meets that responsibility.

Patch — A piece of software designed to update a computer program or its supporting data, to fix or improve it. This includes fixing security vulnerabilities and other bugs, with such patches usually called bugfixes or bug fixes, and improving the usability or performance of the computer program, application, or operating system.

Personal digital assistant ecosystem — A personal digital assistant (PDA) ecosystem consists of an artificial intelligence-powered interface installed on digital devices that can interact with users through text or voice to access information on the Internet and perform certain tasks with personal data shared by the users. Users can interact with PDA ecosystems through skills, which are either made available by third-party developers/providers or the PDA itself.

Platform — A computing platform is, in the most general sense, whatever a pre-existing piece of computer software or code object is designed to run within, obeying its constraints and making use of its facilities. The term can refer to different abstraction levels, including a certain hardware architecture, an operating system (OS), and runtime libraries. In short, it is the stage on which computer programs run.

Policy commitment — A publicly available statement that represents official company policy which has been approved at the highest levels of the company.

Privacy policies — Documents that outline a company’s practices involving the collection and use of information, especially information about users.

Private processes — Requests made through a private process rather than a judicial or governmental process. Private requests to restrict content or accounts can come from a self-regulatory body such as the Internet Watch Foundation, or a notice-and-takedown system, such as the U.S. Digital Millennium Copyright Act. For more information on notice-and-takedown, as well as the DMCA specifically, see the UNESCO report, “Fostering Freedom Online: The Role of Internet Intermediaries,” at http://unesdoc.unesco.org/images/0023/002311/231162e.pdf (p. 40-52 of 211). Private requests for user data are often informal and do not involve a formal legal process. According to the Wikimedia Foundation, which produces transparency reports that disclose data on the number of these types of requests it receives, private requests for user information include cases in which another company sends it a letter or an email requesting “non-public information” about one of its users. This could include a user’s IP address and email.

Prioritization — Prioritization occurs when a network operator “manage[s] its network in a way that benefits particular content, applications, services, or devices.” For RDR’s purposes, this definition of prioritization includes a company’s decision to block access to a particular application, service, or device.

Source: U.S. Federal Communications Commission’s 2015 Open Internet Rules, p. 7 of 400, https://apps.fcc.gov/edocs_public/attachmatch/FCC-15-24A1.pdf

Protocol — A set of rules governing the exchange or transmission of data between devices.

Public archive — A publicly available resource that contains previous versions of a company’s policies, such as its terms of service or privacy policy, or comprehensively explains each round of changes the company makes to these policies.

Public third-party archive — Ideally, companies publish information about the requests they receive so that the public has a better understanding of how content gets restricted on the platform. Companies may provide information about the requests they receive to a third-party archive, such as Lumen (formerly called Chilling Effects), which is an independent research project that manages a publicly available database of requests for removal of online content. This type of repository helps researchers and the public understand the types of content that are requested for removal, as well as gain a better understanding of legitimate and illegitimate requests.

Real-time communications access — Surveillance of a conversation or other electronic communication in “real time” while the conversation is taking place, or interception of data at the very moment it is being transmitted. This is also sometimes called a “wiretap.” Consider the difference between a request for a wiretap and a request for stored data. A wiretap gives law enforcement authority to access future communications, while a request for stored data gives law enforcement access to records of communications that occurred in the past. The U.S. government can gain real-time communications access through the Wiretap Act and Pen Register Act, both part of the Electronic Communications Privacy Act (ECPA); the Russian government can do so through the “System for Operative Investigative Activities” (SORM).

Remedy — “Remedy may include apologies, restitution, rehabilitation, financial or non-financial compensation and punitive sanctions (whether criminal or administrative, such as fines), as well as the prevention of harm through, for example, injunctions or guarantees of non-repetition. Procedures for the provision of remedy should be impartial, protected from corruption and free from political or other attempts to influence the outcome.” (p. 22 of 27.)

Source: “Report of the Special Representative of the Secretary-General on the issue of human rights and transnational corporations and other business enterprises, John Ruggie. Guiding Principles on Business and Human Rights: Implementing the United Nations ‘Protect, Respect and Remedy’ Framework,” 2011.
http://business-humanrights.org/sites/default/files/media/documents/ruggie/ruggie-guiding-principles-21-mar-2011.pdf

Require — The requirement may take place at the time a user signs up for an account or later, upon company request.

Retention of user information — A company may collect data and then delete it. If the company does not delete it, the data is “retained.” The time between collection and deletion is the “retention period.” Such data may fall under our definition of “user information,” or it may be anonymous. Keep in mind that truly anonymous data, which is very rare, may in no way be connected to a user or to the user’s identity, behavior, or preferences.

A related topic is the “retention period.” For example, a company may collect log data on a continual basis, but purge (delete) the data once a week. In this case, the data retention period is one week. However, if no retention period is specified, the default assumption must be that the data is never deleted, and the retention period is therefore indefinite. In many cases users may wish for their data to be retained while they are actively using the service, but would like it to be deleted (and therefore not retained) if and when they quit using the service. For example, users may want a social network service to keep all of their private messages, but when the user leaves the network they may wish that all of their private messages be deleted.

Roll out — A series of related product announcements that are staged over time; the process of making patches, software updates, and software upgrades available to end users.

Skills — Skills are voice-driven personal digital assistant capabilities allowing users to perform certain tasks or engage with online content using devices equipped with a personal digital assistant. Personal digital assistant ecosystem skills are similar to mobile ecosystem apps: users can enable or disable built-in skills or install skills developed by third parties through stores similar to app stores.

Skill store — The platform through which a company makes its own skills as well as those created by third-party developers available for download. A skill store (or skill marketplace) is a type of digital distribution platform for computer software.

Security researcher — Someone who studies how to secure technical systems and/or threats to computer and network security in order to find a solution.

Security update — A widely released fix for a product-specific, security-related vulnerability. Security vulnerabilities are rated by their severity: critical, important, moderate, or low.

Security vulnerability — A weakness which allows an attacker to reduce a system’s information assurance. A vulnerability is the intersection of three elements: a system susceptibility or flaw, attacker access to the flaw, and attacker capability to exploit the flaw.

Senior executives — CEO and/or other members of the executive team as listed by the company on its website or other official documents such as an annual report. In the absence of a company-defined list of its executive team, other chief-level positions and those at the highest level of management (e.g., executive/senior vice president, depending on the company) are considered senior executives.

Shares / sharing — The company allows a third party to access user information, either by freely giving the information to a third party (or the public, or other users) or selling it to a third party.

Shut down or restrict access to the network — Network shutdown refers to the intentional disruption of internet or electronic communications, including telecom services such as cellular telephony and SMS. This includes a blanket shutdown of all cellular or internet services within a geographic area and targeted blocking of specific services, such as social media or messaging apps.

Software update — A software update (also sometimes called a software patch) is a free download for an application or software suite that provides fixes for features that aren’t working as intended or adds minor software enhancements and compatibility. An update can also include driver updates that improve the operation of hardware or peripherals, or add support for new models of peripherals.

Software upgrade — A software upgrade is a new version of a piece of software that offers a significant change or improvement over the current version.

Stakeholders — People who have a “stake” because they are affected in some way by a company’s actions or decisions. Note that stakeholders are not the same as “rights holders” and that there are different kinds of stakeholders: those who are directly affected, and “intermediary stakeholders” whose role is to advocate for the rights of direct stakeholders. Rights holders are the individuals whose human rights could be directly impacted. They interact with the company and its products and services on a day-to-day basis, typically as employees, customers, or users. Intermediary stakeholders include individuals and organizations informed about and capable of speaking on behalf of rights holders, such as civil society organizations, activist groups, academics, opinion formers, and policymakers. (p. 10 of 28.) Source: Stakeholder Engagement in Human Rights Due Diligence: Challenges and Solutions for ICT Companies, BSR, Sept. 2014, http://www.bsr.org/reports/BSR_Rights_Holder_Engagement.pdf

Stakeholder engagement — Interactions between the company and stakeholders. Companies or stakeholders can initiate these interactions, and they can take various formats, including meetings, other communication, etc.

Structured data — “Data that resides in fixed fields within a record or file. Relational databases and spreadsheets are examples of structured data. Although data in XML files are not fixed in location like traditional database records, they are nevertheless structured, because the data are tagged and can be accurately identified.” Conversely, unstructured data is data that “does not reside in fixed locations. The term generally refers to free-form text, which is ubiquitous. Examples are word processing documents, PDF files, e-mail messages, blogs, Web pages and social sites.” Sources: PC Mag Encyclopedia, “structured data,” http://www.pcmag.com/encyclopedia/term/52162/structured-data; “unstructured data,” http://www.pcmag.com/encyclopedia/term/53486/unstructured-data
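
As a small illustration (field names invented for the example), the same facts can be recorded as structured data that a program can query directly, or as unstructured free-form text:

    import json

    # Structured: fixed, labeled fields, exportable as JSON, CSV, or a database row.
    structured_record = {"country": "DE", "requests_received": 120, "requests_complied": 45}
    print(json.dumps(structured_record))

    # Unstructured: the same facts as free text, which must be parsed or read by a human.
    unstructured_record = "In Germany we received 120 requests and complied with 45 of them."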

Targeted advertising — Targeted advertising, also known as “interest-based advertising,” “personalized advertising,” or “programmatic advertising,” refers to the practice of delivering tailored ads to users based on their browsing history, location information, social media profiles and activities, as well as demographic characteristics and other features. Targeted advertising relies on vast data collection practices, which can involve tracking users’ activities across the internet using cookies, widgets, and other tracking tools, in order to create detailed user profiles.

Targeting parameters — The conditions, typically set by the advertiser, that determine which users will be shown the advertising content in question. This can include users’ demographics, location, behavior, interests, connections, and other user information.

Team / program — A defined unit within a company that has responsibility over how the company’s products or services intersect with, in this case, freedom of expression and/or privacy.

Technical means — Companies deploy various technologies, such as cookies, widgets and buttons to track users’ activity on their services and on third-party sites and services. For example, a company may embed content on a third-party website and collect user information when a user “likes” or otherwise interacts with this content.

Terms of service — This document may also be called Terms of Use, Terms and Conditions, etc. The terms of service “often provide the necessary ground rules for how various online services should be used,” as stated by the EFF, and represent a legal agreement between the company and the user. Companies can take action against users and their content based on information in the terms of service. Source: Electronic Frontier Foundation, “Terms of (Ab)use” https://www.eff.org/issues/terms-of-abuse

Third party — A “party” or entity that is anything other than the user or the company. For the purposes of this methodology, third parties can include government organizations, courts, or other private parties (e.g., a company, an NGO, an individual person).

Throttling — A blunt form of traffic shaping in which a network operator slows the flow of packets through a network. Mobile operators may throttle traffic to enforce data caps. For more information, see: Open Signal, “Data throttling: Why operators slow down your connection speed,” http://opensignal.com/blog/2015/06/16/data-throttling-operators-slow-connection-speed/

Traffic shaping — Adjusting the flow of traffic through a network. This can involve conditionally slowing certain types of traffic. Traffic shaping can be used for legitimate network management purposes (e.g., prioritizing VoIP traffic ahead of normal web traffic to facilitate real-time communication) or for reasons that counter net neutrality principles (e.g., intentionally slowing video traffic to dissuade users from using high-bandwidth applications).

Unofficial processes — Processes or channels through which the government makes demands or requests for content or account restrictions outside of official processes established by law or regulation. For example, a local official may demand the removal of certain content, or protest it, through an informal channel.

Use/purpose limitation — According to the principle of use or purpose limitation, entities that handle user information should state their purpose for doing so and should not use this information for any other purpose unless they receive consent from the user. See also the principle of data minimization (above).

Users — Individuals who use a product or service. This includes people who post or transmit the content online as well as those who try to access or receive the content. For indicators in the freedom of expression category, this includes third-party developers who create apps that are housed or distributed through a company’s product or service.

User-generated signals — Many companies allow users to “opt out” of tracking by setting an array of company-specific cookies. If a user deletes cookies in order to protect privacy, they are then tracked until they reset the “opt-out” cookie. Furthermore, some companies may require a user to install a browser add-on to prevent tracking. These two common scenarios are examples of users being forced to rely on company-specific signals, which therefore do not count as user-generated signals. Rather, a user-generated signal comes from the user and is a universal message that the user should not be tracked. The primary option for user-generated signals today is the “Do Not Track” header (covered above), but this wording leaves the door open to future means for users to signal that they do not want to be tracked.

User information — Any data that is connected to an identifiable person, or may be connected to such a person by combining datasets or utilizing data-mining techniques. User information may be either collected or inferred. As further explanation, user information is any data that documents a user’s characteristics and/or activities. This information may or may not be tied to a specific user account. This information includes, but is not limited to, personal correspondence, user-generated content, account preferences and settings, log and access data, data about a user’s activities or preferences collected from third parties either through behavioral tracking or purchasing of data, and all forms of metadata. User information is never considered anonymous except when included solely as a basis to generate global measures (e.g., number of active monthly users). For example, the statement, “Our service has 1 million monthly active users,” contains anonymous data, since it does not give enough information to know who those 1 million users are.

Whistleblower program — This is a program through which company employees can report any alleged malfeasance they see within the company, including issues related to human rights. This typically takes the form of an anonymous hotline and is often the responsibility of a chief compliance or chief ethics officer.

Widget — A piece of code allowing a user or company to embed applications and content from one website or service on a different third-party site or service. In some cases, companies use widgets on a third-party website and collect information about visitors to that website without their knowledge.

Zero-rating program — “Zero-rating” refers to the practice of not charging users for data used to access certain online services or platforms. Zero-rating is regarded as a type of network prioritization that undermines the principle of network neutrality.
