1. The company lists the number of accounts affected.
2. The company lists the number of pieces of content or URLs restricted.
3. The company lists the types of content restricted during the reporting period (e.g., hate speech, harassment, incitement to violence, sexually explicit content).
4. The company provides examples of why it took action in different types of cases.
5. The company reports this data at least once a year.
6. The data reported by the company can be exported as a structured data file.
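Element 6 asks that the reported data be machine-readable rather than published only as prose or an HTML table. As a purely illustrative sketch (the file name, column names, and figures below are hypothetical and not drawn from any company's actual report), a structured data file might resemble the following CSV, generated here with Python's standard csv module:

    import csv

    # Hypothetical rows illustrating one possible structure for a
    # terms-of-service enforcement export; all values are invented
    # for illustration only.
    rows = [
        ("2023-H1", "content_removed", "hate_speech", 1204, 957),
        ("2023-H1", "account_suspended", "harassment", 310, 310),
        ("2023-H2", "content_removed", "incitement_to_violence", 88, 76),
    ]

    with open("enforcement_report.csv", "w", newline="") as f:
        writer = csv.writer(f)
        # Header names the reporting period, the type of restriction,
        # the content category, and the counts sought by elements 1-3.
        writer.writerow([
            "reporting_period", "restriction_type", "content_category",
            "items_restricted", "accounts_affected",
        ])
        writer.writerows(rows)

Any widely used structured format (CSV, JSON, XML) would satisfy the spirit of this element; the key property is that the data can be parsed programmatically rather than transcribed by hand from a web page.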
Guidance: Companies may employ staff to review content and/or user activity, or they may rely on community flagging mechanisms through which other users flag content and/or activity for company review. This indicator seeks company disclosure of data on the number of instances in which a company has removed content or restricted users’ access due to violations of its terms of service. Publicizing this data gives the public a more accurate view of the content removal ecosystem, as well as of companies’ own role in it. We expect companies to regularly publish data about their own decisions to remove content.
Evaluation: This indicator is scored using a checklist, meaning a company can receive full credit only if its disclosure meets all elements of the checklist.
Potential sources:
For more information, consult the glossary of terms.