Soon, Ranking Digital Rights will release the Generative AI Accountability (GAIA) Scorecard, evaluating how well major consumer-facing generative AI services respect the human rights to privacy, non-discrimination, freedom of expression, and freedom of information. Today, we are sharing a consultation draft of the indicators we will use to evaluate companies for the GAIA Scorecard. To ensure the indicators are credible and effective, we need your help! If you are an expert in, or otherwise knowledgeable about, these technologies and their risks to human rights, we invite you to participate in our fast and flexible feedback process. All participants will be credited in the final report unless they prefer to remain anonymous.
Read the draft indicators and give feedback!
The draft indicators are based on preliminary standards we shared in June 2023, along with a detailed rationale and Q&A about the project. The project draws on RDR’s ten years of experience ranking tech companies to help address the human rights risks of generative AI, including “turbocharged information manipulation,” bias, non-consensual pornography, fraud, and incentives for continued privacy violations.
Whether or not you wish to provide feedback on the standards, we invite experts and stakeholders to join RDR’s mailing list for discussion of civil society and academic projects that evaluate the policies and transparency of generative AI services. To join, send a message to methodology@rankingdigitalrights.org.