The company should conduct regular, comprehensive, and credible due diligence, such as through robust human rights impact assessments, to identify how all aspects of its policies and practices related to the development and use of algorithmic systems affect users’ fundamental rights to freedom of expression and information, to privacy, and to non-discrimination, and to mitigate any risks posed by those impacts.
Elements:
- Does the company assess freedom of expression and information risks associated with its development and use of algorithmic systems?
- Does the company assess privacy risks associated with its development and use of algorithmic systems?
- Does the company assess discrimination risks associated with its development and use of algorithmic systems?
- Does the company conduct additional evaluation whenever its risk assessments identify concerns?
- Do senior executives and/or members of the company’s board of directors review and consider the results of assessments and due diligence in their decision-making?
- Does the company conduct assessments on a regular schedule?
- Are the company’s assessments assured by an external third party?
- Is the external third party that assures the assessment accredited to a relevant and reputable human rights standard by a credible organization?
Definitions:
Algorithmic decision-making system — A system that uses algorithms, machine learning and/or related technologies to automate, optimize and/or personalize decision-making processes.
Board of directors — Board-level oversight should involve members of the board having direct oversight of issues related to freedom of expression and privacy. This does not have to be a formal committee, but the responsibility of board members in overseeing company practices on these issues should be clearly articulated and disclosed on the company’s website.
Human Rights Impact Assessments (HRIA)/assess/assessment — HRIAs are a systematic approach to due diligence. A company carries out these assessments or reviews to see how its products, services, and business practices affect the freedom of expression and privacy of its users.
For more information about Human Rights Impact Assessments and best practices in conducting them, see this special page hosted by the Business & Human Rights Resource Centre: https://business-humanrights.org/en/un-guiding-principles/implementation-tools-examples/implementation-by-companies/type-of-step-taken/human-rights-impact-assessments
The Danish Institute for Human Rights has developed a related Human Rights Compliance Assessment tool (https://hrca2.humanrightsbusiness.org), and BSR has developed a useful guide to conducting an HRIA: http://www.bsr.org/en/our-insights/bsr-insight-article/how-to-conduct-an-effective-human-rights-impact-assessment
For guidance specific to the ICT sector, see the excerpted book chapter (“Business, Human Rights and the Internet: A Framework for Implementation”) by Michael Samway on the project website at: http://rankingdigitalrights.org/resources/readings/samway_hria.
Senior executives — CEO and/or other members of the executive team as listed by the company on its website or other official documents such as an annual report. In the absence of a company-defined list of its executive team, other chief-level positions and those at the highest level of management (e.g., executive/senior vice president, depending on the company) are considered senior executives.
Third party – A “party” or entity that is anything other than the user or the company. For the purposes of this methodology, third parties can include government organizations, courts, or other private parties (e.g., a company, an NGO, an individual person).
Indicator guidance: Algorithmic systems can harm human rights in a variety of ways. The development of such systems can rely on user information, often collected without the knowledge or explicit, informed consent of the data subject, which constitutes a privacy violation. Such systems can also cause or contribute to harms to freedom of expression and information. In addition, the purpose of many algorithmic decision-making systems is to automate the personalization of users’ experiences on the basis of collected and inferred user information, which risks leading to discrimination. Companies should therefore conduct human rights risk assessments related to their development and use of algorithmic systems, as stated in the Council of Europe’s Recommendation on the human rights impacts of algorithmic systems (2020).
This indicator examines whether companies conduct robust, regular, and accountable human rights risk assessments that evaluate their policies and practices relating to the development and deployment of algorithmic systems. These assessments should be part of the company’s formal, systematic due diligence activities, aimed at ensuring that its decisions and practices do not cause, contribute to, or exacerbate human rights harms. Assessments enable companies to identify the risks that their development and deployment of algorithmic systems pose to users’ human rights and to take steps to mitigate those harms where they are identified.
Note that this indicator does not expect companies to publish detailed results of their human rights impact assessments, since assessments may include sensitive information. Rather, it expects companies to disclose that they conduct HRIAs and to provide information on what their HRIA process encompasses.
Potential sources:
- Company CSR/sustainability reports
- Company human rights policy
- Regulatory documents (e.g., U.S. Federal Trade Commission)
- Reports from third-party assessors or accreditors
- Global Network Initiative assessment reports
- Company artificial intelligence policies, including AI principles, frameworks, and use guidelines