Using Data to Support Social Movements: What the Research Says

Lauren Kogen is an Assistant Professor in the Department of Media Studies and Production at Temple University in Philadelphia. Her research focuses on communication for social change and on how social change issues are framed in the news. You can read more about her on her website.


As a researcher at Temple University in Philadelphia, I study how stories about social problems and their possible solutions make their way into the mainstream media, how they get covered, and how audiences respond to them. One key question I look at is: How do we get news audiences to understand social issues, care about them, and become informed about potential policy solutions to very complex problems?

Given that part of RDR’s mission is to influence corporations by having their behavior around privacy and freedom of expression covered in the media, I was thrilled when I was hired to conduct an evaluation of the RDR Index four years ago. This was an opportunity to dig into research questions around how the news can work to promote more comprehensive coverage of issues related to social justice.

As part of this evaluation, I interviewed 14 civil society organizations about their use of the RDR Index, what they saw as its strengths and weaknesses, and how both it and other human-rights-related rankings could help push forward social movements and bring social issues into the media and into public conversation. In other words, I wanted to know: How do we take numbers, numerical rankings, and indicators (things that might strike some as academic, esoteric, or just plain boring) and turn them into something that can impassion the public and spur social change?

Through these interviews, I came to a few conclusions that I’ll share below; my full write-up of the evaluation is also available. In short:

  1. Indices like RDR’s offer three critical resources for activists trying to get their narratives into the mainstream media: legitimate information, newsworthy information, and flexible information.
  2. Activists find it really hard to use data from these indices (but we can fix that!).

Legitimate information refers to the idea that journalists typically need their sources of information to be seen as objective and reliable. This poses a problem for social movement actors who are often perceived as biased since they have a clear stance on current societal problems and are advocating for a particular course of political action. Index data—if based on a rigorous, transparent methodology and created by organizations seen as credible, such as RDR—can give activists trying to get their stories into the media an increased perception of objectivity and legitimacy.

Newsworthy information refers to the potential overlap between stories that civil society organizations (CSOs) want to get into the public sphere and stories that news outlets are interested in publishing. News organizations do need and want evidence that social problems exist, especially if that evidence can point to actors or organizations who are misbehaving. But the investigative reporting needed to uncover this information can be costly for news outlets. The public still values investigative journalism, but shrinking newsroom budgets mean the total number of issues investigated is declining. Activists bringing this evidence to journalists is therefore a win-win for both sides: activists can combine their understanding of the issue with numerical data and analysis, and journalists can shine a spotlight on how particular actors are negatively impacting human rights.

Flexible information refers to the idea that numbers do not represent a black-and-white version of truth or an incontrovertible version of reality. Numbers, including rankings, are simply descriptive. They describe, or “indicate,” how a particular organization performs on a particular metric. They are in many ways meaningless until someone explains their significance and ties them together to tell a story.

The indicators can thus tell a variety of stories. For example, within the broad category of privacy, the methodology includes 23 indicators that, when taken together, produce a score (which is itself another indicator) regarding how well corporations adhere to human rights principles related to privacy. The scores for each of these 23 indicators (e.g., “Sharing of User Information”) are calculated by aggregating between one and 12 “sub-indicators,” which are called elements (e.g., whether the company discloses sharing information with governments). Altogether, the Big Tech Scorecard is made up of 58 indicators (each with multiple elements), across 14 companies and 43 specific services, resulting in tens of thousands of data points overall for each iteration of the Index.
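The nesting described above (elements rolled up into indicators, indicators rolled up into a category score) can be sketched as a small data structure. Everything below is purely illustrative: the element names, the scores, and the assumption that scores are simple averages are mine, not actual Scorecard data or RDR's scoring formula.

```python
# Illustrative sketch of the nested structure described above.
# Names, scores, and the use of simple averaging are all hypothetical,
# not actual RDR data or methodology.

# Each indicator is scored from its "elements" (sub-indicators);
# a category score (e.g., privacy) is built from its indicators.
indicators = {
    "Sharing of User Information": {
        "discloses sharing with governments": 100,
        "discloses sharing with third parties": 50,
    },
    "Notification of Terms Changes": {
        "notifies users of policy changes": 0,
    },
}

def indicator_score(elements: dict) -> float:
    """Average the element scores for one indicator."""
    return sum(elements.values()) / len(elements)

def category_score(inds: dict) -> float:
    """Average the indicator scores for a category such as privacy."""
    return sum(indicator_score(e) for e in inds.values()) / len(inds)

print(category_score(indicators))  # 37.5
```

The point of the sketch is simply that each published number sits on top of a pile of more granular numbers, any layer of which can anchor a different story.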

A variety of activist interests, including privacy rights, freedom of expression, children’s rights, and democracy promotion, can find indicators and sets of indicators within the dataset that help them tell a particular human rights story. For example, while conducting my evaluation, I spoke to one interviewee working on democracy promotion who suggested that the data could potentially be used to tell the story of how a particular company’s score has dropped because it’s gotten cozier with an authoritarian regime and changed its policies accordingly. How any one group uses the data would depend upon the particular political moment, the news environment, the agenda of the organization, and the indicator scores.

The problem? As I just mentioned, the Scorecard produces tens of thousands of data points in each iteration. The organizations I spoke with wanted to use this information, or at least suggested they did, but many were lost on how to do so. These organizations are often small, with an overworked staff, each wearing multiple hats. They can’t possibly also be expected to become sophisticated data analysts. Many therefore wanted RDR to present the data in a different way, or parse the data for them, to help them tell their own stories.

This creates the following paradox: On one hand, the fact that the RDR Index has tens of thousands of data points is great because it means there are endless ways to use the information depending on an organization’s goals. In addition, this data provides mountains of rigorous evidence that assists in advancing policy arguments. But at the same time, the perception is that organizations can’t employ the data themselves, which limits the scope of its current usage.

RDR will never be able to produce all the stories that CSOs want, and especially not at the moment they’re needed. Activists are the ones with their fingers on the pulse of what is happening in their areas of expertise and therefore are best placed to know what stories need to come out, when, how, and where. They are best placed to navigate the news and information space, supply journalists with needed information, or respond to news events.

So what is needed now is a way for activists to use the data themselves. The fact that indices are composed of numbers should not make them impenetrable. Contrary to what some data skeptics assume, using indicators does not always require advanced mathematical skills; often it is simply a matter of understanding what the indicators mean. In the case of the Big Tech Scorecard, for instance, “analysis” might simply mean looking at the scores for a particular indicator of interest (e.g., “Does the company notify users when terms of service change?”) and comparing scores across companies to see who performed best and worst. Framed this way, analyzing the dataset shouldn’t be too hard.
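The kind of "analysis" described above really can be this small. The sketch below compares invented scores for one indicator across three hypothetical companies; the names and numbers are mine, not Scorecard results.

```python
# Minimal sketch of comparing one indicator's scores across companies.
# Company names and scores are invented for illustration only.
scores = {
    "Company A": 62,
    "Company B": 45,
    "Company C": 80,
}

# Rank companies from best to worst on this single indicator.
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
best_company, best_score = ranked[0]
worst_company, worst_score = ranked[-1]

print(f"Best performer: {best_company} ({best_score})")
print(f"Worst performer: {worst_company} ({worst_score})")
```

No statistics are involved; the story ("Company B lags its peers on notifying users") falls straight out of a sort.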

RDR has begun addressing this limitation. The organization has been working to meet civil society and other stakeholders halfway, which has translated into an expansion of RDR’s policy advocacy, investor engagement, and of its guidance for organizations on how best to employ the methodology and standards to highlight the issues they care about, across countries and regions. It will also be launching a new Research Lab which will include trainings to help CSOs learn to navigate the data to fit their own needs. It remains to be seen how this new effort will go, but it is crucial in order to maximize the value of the Index for activists and social movements.

I believe that activists who know their issues best should be writing and informing media stories; it is not only up to RDR. Numerical analysis cannot replace the passion or emotion that breathes life into a movement, but anecdotal stories of wrongdoing can be buttressed with reliable data to strong and positive effect. (A useful example of this is the set of companion essays that RDR now publishes alongside its Scorecards.) Such data can be used to strengthen the sway of policy arguments made both to policymakers and to news organizations. This is a resource that is therefore both sorely needed and deserving of further attention.

 
