Much has happened since my last blog post in early July, which announced the publication of our draft criteria and the launch of our case study research. Here is an overview:
We are deeply grateful to the MacArthur Foundation for a substantial 2-year investment in our work for 2014 and 2015. While the grant does not cover all project costs, MacArthur’s commitment lays the foundation for continuing our work through the next two years in one form or another. The ultimate scope of the ranking, how much engagement we are able to conduct, and how well the ranking is promoted, among other things, depend on the extent to which we can raise other funds. Please also see our lists of 2013 funders and project partners, without whose support and commitment we would never have gotten to this point.
Case Study Research and Methodology Development:
In July we launched case studies focused on Internet companies in India, Russia, China, and the U.S. In August and September we began developing and launching case studies focused on telecommunications companies, including:
- a comparative case study of Deutsche Telekom in Germany and its subsidiary T-Mobile Hungary;
- a comparative study of several telcos operating in Brazil, with an added examination of Spain-based Telefonica, whose subsidiaries include Vivo in Brazil;
- an examination of two Indian telcos, Bharti Airtel and BSNL.
A U.S. telco case study is also getting off the ground, and we are assessing whether further case studies of other European telcos are needed as part of this methodology-development stage. The purpose of the case study research is to test the draft criteria on selected companies in several different types of jurisdictions, in order to gain a better understanding of how the criteria play out across a range of companies and legal environments. Most importantly, we hope to answer the following questions:
- What criteria should apply to all companies everywhere, regardless of size, maturity, or jurisdiction? (In other words, what do we believe no company anywhere has an excuse not to be doing, even in hostile legal and political environments?)
- What criteria can only reasonably be met after a company has reached a certain size?
- What criteria can only be met under certain legal/political/regulatory conditions?
- How should we focus the methodology – what is most important and urgent to measure companies on and what is better addressed by government-focused advocacy? (The answer to this will be informed heavily by the answers to the first three questions.)
- What can actually be meaningfully measured and compared and what simply cannot?
- On what basis should we select the companies we will evaluate in Phase 1?
- In particular countries and parts of the industry, what impact is such a ranking likely to have on company behavior?
It is too early to report final results or conclusions, as research is still ongoing, but most researchers are telling us that the next iteration of the criteria should be more streamlined and focused in order to be effective. That means we won’t be able to include everything on everybody’s laundry list of what an ideal company ought to be doing on all fronts; we will have to prioritize what is most urgent, important, and measurable.
We are also learning important things about the relationship between what companies say and commit to publicly, what they actually do in practice, and what their practices mean concretely for users’ free expression and privacy. While there is clearly a relationship between companies’ public commitments and their practice, that relationship differs from context to context. Companies that make similar commitments can still vary widely in actual practice, with a range of different consequences for users. We need to make sure our methodology reflects a nuanced understanding of this reality and emphasizes the right things.
If all goes as planned, we hope to have final drafts of the case study write-ups completed in December. The goal is to publish them in late January or early February, alongside a final draft of our proposed ranking methodology, which will be the product of what we learned from the case study research. The methodology will then go through a period of public consultation and stakeholder engagement in early 2014. Funds permitting, we hope to make a final revision and begin applying the methodology in Q2 2014, with the aim of producing our first Phase 1 ranking report in Q4 2014. Please see the project timeline for more details.
Company engagement:
The case study research includes company interviews. We have encountered a range of reactions from companies: from highly enthusiastic, to curious, to neutral but willing to talk, to negative, to hostile, to indifferent, to radio silence. On the positive end of the spectrum, we have had some truly enlightening and energizing meetings and calls. We have learned a tremendous amount from companies that have been willing to talk with us about the specifics of the draft criteria in the context of their own products, services, and operations. We continue to reach out to other companies with the message that if we can talk to them now during the case study research phase, the final draft methodology – and ultimately the ranking – is more likely to take their concerns and perspectives on board. We have developed an FAQ for companies.
Civil society engagement:
It is vital that our ranking promote best practices by ICT companies that minimize digital threats to human rights defenders and journalists at risk. One of the things we intend to do this fall is to revise our human rights risk scenarios (which haven’t been revised much since April) in consultation with groups working with human rights defenders and journalists, and to make sure that these scenarios play a strong role in the methodology development process.
Also note that we have been actively reaching out to all the organizations and projects we know of whose work relates to supporting the emergence of better standards and practices around digital rights, in order to maximize synergies and minimize duplication.
Investor engagement:
We are exploring whether and how we might work with an organization that provides research data to responsible investors on environmental, social, human rights, and governance factors.
Academic relationships:
Our case study research network includes academics from around the world. Our partnership with the University of Pennsylvania has yielded a course on “Human Rights, Corporate Responsibility and ICT” at Penn Law, two public events, and a workshop. If you are in or near Philadelphia on October 17th, please join us for an event called “Scholarship After Snowden.” This project is committed to supporting our academic partners – whose research contributions are so vital – in their efforts to beef up teaching and research on issues related to digital rights.
Staffing:
I remain the only full-time staff member of the project, so you can blame me for all of the project’s shortcomings. Click here to see the long list of people without whom the project would be nothing and nowhere. While we still cannot afford to offer a one-year full-time contract to a person with the kind of experience we need, we do plan to hire a 6-month consultant, ideally based in Europe, to support methodology development and stakeholder engagement. The job description is posted on the New America Foundation website.