F13. Automated software agents (“bots”)

Companies should clearly disclose policies governing the use of automated software agents (“bots”) on their platforms, products and services, and how they enforce such policies.

Elements:

  1. Does the company clearly disclose rules governing the use of bots on its platform?
  2. Does the company clearly disclose that it requires users to clearly label all content and accounts that are produced, disseminated or operated with the assistance of a bot?
  3. Does the company clearly disclose its process for enforcing its bot policy?
  4. Does the company clearly disclose data on the volume and nature of user content and accounts restricted for violating the company’s bot policy?

Definitions:

Account / user account — A collection of data associated with a particular user of a given computer system, service, or platform. At a minimum, the user account comprises a username and password, which are used to authenticate the user’s access to his/her data.

Account restriction / restrict a user’s account — Limitation, suspension, deactivation, deletion, or removal of a specific user account or permissions on a user’s account.

Bot — An automated online account where all or substantially all of the actions or posts of that account are not the result of a person.

Clearly disclose(s) — The company presents or explains its policies or practices in its public-facing materials in a way that is easy for users to find and understand.

Content — The information contained in wire, oral, or electronic communications (e.g., a conversation that takes place over the phone or face-to-face, the text written and transmitted in an SMS or email).

Content restriction — An action the company takes that renders an instance of user-generated content invisible or less visible on the platform or service. This action could involve removing the content entirely or could take a less absolute form, such as hiding it from certain users only (e.g., inhabitants of a particular country or people under a certain age), limiting users’ ability to interact with it (e.g., making it impossible to “like”), adding counterspeech to it (e.g., corrective information on anti-vaccine posts), or reducing the amplification provided by the platform’s curation systems.

Indicator guidance: Many of the services evaluated by RDR (notably social media platforms) allow users to create automated software agents, or “bots,” that automate various actions a user account can take, such as posting or boosting content (re-tweeting, for example). There are many innocuous or even positive uses of bots; artists, for instance, use Twitter bots for parody. There are also more problematic uses that many companies forbid or discourage, such as when political parties or their surrogates use botnets to promote certain messages or to artificially inflate a candidate’s reach in order to manipulate public discourse and outcomes. On some social media platforms, bots or coordinated networks of bots (“botnets”) can be used to harass users (“brigading”), artificially amplify certain pieces of content (mass retweeting, etc.), and otherwise distort public discourse on the platform. Some experts have called for companies to require users who operate bots to explicitly label them as such, in order to help detect such distortions.

Companies that operate platforms allowing bots should therefore have clear policies governing the use of bots on those platforms. They should disclose whether they require content and accounts that are produced, disseminated or operated with the assistance of a bot to be labeled as such. They should also clarify their process for enforcing their bot policies and publish data on the volume and nature of content and accounts restricted for violating these rules.

Potential sources:

  • Platform policies for developers
  • Automation or bot rules
  • Transparency reports