The company should publish policies related to its use of algorithms that are easy for users to find and understand.
Elements:
- Are the company’s algorithmic system use policies easy to find?
- Are the algorithmic system use policies available in the primary language(s) spoken by users in the company’s home jurisdiction?
- Are the algorithmic system use policies presented in an understandable manner?
Definitions:
Algorithms: An algorithm is a set of instructions used to process information and deliver an output based on the instructions’ stipulations. Algorithms can be simple pieces of code, but they can also be incredibly complex, “encoding for thousands of variables across millions of data points.” In the context of internet, mobile, and telecommunications companies, some algorithms—because of their complexity, the amounts and types of user information fed into them, and the decision-making functions they serve—have significant implications for users’ human rights, including freedom of expression and privacy. (A simple illustrative sketch follows the definitions below.) See more at: “Algorithmic Accountability: A Primer,” Data & Society: https://datasociety.net/wp-content/uploads/2018/04/Data_Society_Algorithmic_Accountability_Primer_FINAL-4.pdf
Algorithmic system use policies — Documents that outline a company’s practices involving the use of algorithms, machine learning, and automated decision-making.
Easy to find — The algorithmic system use policy is located one or two clicks away from the homepage of the company or service, or is located in a logical place where users are likely to find it.
Easy to understand / understandable manner — The company has taken steps to help users actually understand its algorithmic system use policy. This includes, but is not limited to, providing summaries, tips, or guidance that explain what the policy means; using section headers, readable font sizes, or other graphical features to help users understand the document; or writing the policy in readable syntax.
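To make the definition of “algorithms” above more concrete, the sketch below shows a toy, purely hypothetical content-ranking algorithm in Python. The post fields, weights, and scoring formula are invented for illustration only and do not describe any company’s actual system; real algorithmic systems deployed by internet, mobile, and telecommunications companies are typically machine-learned and far more complex.

```python
# Purely hypothetical sketch: a fixed set of instructions that takes
# information as input (user engagement signals) and delivers an output
# (an ordered feed). All field names and weights are invented.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    likes: int        # hypothetical engagement signal
    shares: int       # hypothetical engagement signal
    age_hours: float  # time since the post was published


def score(post: Post) -> float:
    """Combine engagement signals, discounting older posts."""
    engagement = post.likes + 2.0 * post.shares
    return engagement / (1.0 + post.age_hours)


def rank_feed(posts: list[Post]) -> list[Post]:
    """Return posts ordered by score, highest first."""
    return sorted(posts, key=score, reverse=True)


if __name__ == "__main__":
    feed = [
        Post("a", likes=10, shares=1, age_hours=2.0),
        Post("b", likes=3, shares=5, age_hours=0.5),
        Post("c", likes=50, shares=0, age_hours=24.0),
    ]
    for post in rank_feed(feed):
        print(post.post_id, round(score(post), 2))
```

Even a toy example like this shows how a small set of rules and weights determines what information users see first, which is why the elements above ask companies to publish clear, accessible policies about such systems.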
Indicator guidance: The use of algorithmic systems can have adverse effects on fundamental human rights—specifically, on the right to freedom of expression and information as well as the right to non-discrimination. In addition to clearly committing to respect and protect human rights as they develop and deploy these technologies (see Indicator G1, Element 3), companies should also publish policies that clearly describe the terms for how they use algorithmic systems across their services and platforms. Just as terms of service policies or user agreements outline what types of content or activities are prohibited, companies that use algorithmic systems with the potential to cause human rights harms should publish a clear and accessible policy stating the nature and functions of these systems. As recommended by the Council of Europe’s Committee of Ministers in its 2020 recommendation to member States on the human rights impacts of algorithmic systems, this policy should be easy to find, presented in plain language, and contain options for users to manage settings.
Note that in this indicator, we are looking for a policy that explains the terms for how the company uses algorithmic systems. We also look for companies to disclose how they develop and test algorithmic systems, which is addressed in the Privacy category in draft indicator P1b.
Potential sources:
- Company blogs and portals on AI