Ranking Digital Rights: April 4 Workshop Summary
(With thanks to Jon Diamond and Julia Graber for note-taking and draft summaries.)
On April 4th the Ranking Digital Rights project convened a day-long, invitation-only workshop to inform the drafting of the project’s Phase 1 criteria for ranking Internet and telecommunications companies on their policies and practices related to free expression and privacy. The meeting also helped to determine the priorities, scope, and focus of the research needed to refine and improve the ranking criteria and methodology. This research process will enable the team to produce a final draft of the Phase 1 methodology by the end of the year.
Invited participants included: University of Pennsylvania faculty advisors; graduate and undergraduate students involved in the research; international research partners from Brazil, China, India, Russia, the United Kingdom and elsewhere; human rights advocates; technologists; socially responsible investors; experts on best practices in corporate ranking and rating systems; experts in the field of business and human rights, business ethics, and corporate social responsibility.
To encourage maximum frankness by all participants, the meeting was conducted under the Chatham House Rule. Participants discussed a range of complex issues and questions that had arisen during the course of developing an initial list of possible ranking criteria. These questions included:
- How do we learn from the experience of other corporate rankings projects, so that we can build on best practice and avoid mistakes of others?
- What are the lessons learned thus far by the Global Network Initiative (GNI) about what free expression and privacy criteria should be applied to Internet companies?
- What are the unique challenges in creating standards for, evaluating, and comparing the policies and practices of telecommunications companies, about which there is much less experience and learning than has accumulated around Internet companies via the GNI?
- How detailed should our criteria be in evaluating specific elements of privacy policies, terms of service, and the deployment (or lack thereof) of specific technical standards?
- How do we ensure that this project – which focuses on companies as its unit of evaluation – coordinates and maximizes its compatibility (from both a research and advocacy standpoint) with projects recently launched by several academic and advocacy organizations to evaluate Internet openness, privacy, and other values using the nation-state as the unit of evaluation?
- How do we ensure that our data is not only open, but also interoperable with that of other related projects?
- Given limited resources what are the most urgent research questions that need to be answered?
Lessons from Other Rankings Initiatives
Rankings, ratings and indexes have become a common instrument for holding companies accountable for their human rights and environmental responsibilities in other industries and issue areas. Drawing on a range of well-developed corporate transparency and accountability rating systems, workshop participants discussed best practices and challenges specific to the task of rating ICT companies in particular. From this discussion emerged several key conclusions:
1. Complexity does not necessarily imply robustness. There is power in simplicity. A strong methodology does not seek to cover every single possible detail that researchers can identify. Instead it focuses on the most important issues that define excellence and which are most material to the ranking’s audience. In addition to a well-reasoned methodology, clear and simple output has the advantage of making the ratings more accessible to target audiences.
2. Ratings should be tied to a credible business case. Although the socially responsible investment (SRI) community represents an important target audience, ordinary investors also stand to play a role in influencing companies. While the former is likely to identify with the values of the ratings, the latter is much more interested in value—though some non-SRI investors do integrate environmental, social, and corporate governance (ESG) issues in their own assessments. Maximizing leverage over companies may mean that ordinary investors need to be targeted in addition to socially responsible investors; this can be best accomplished by demonstrating the business sense behind sound digital rights policy and practice.
3. It is important to distinguish between companies’ commitments and their performance and to measure both over time. In particular, companies should have policy commitments at the highest levels and actively assess human rights risks in accordance with the UN Guiding Principles on Business and Human Rights.
4. Company engagement is key. On the one hand, company engagement improves the legitimacy of ratings. On the other, engagement provides opportunities to help companies improve their practices. One participant with direct experience in corporate accountability ratings offered the anecdote of companies calling for his organization’s rankings’ release to be delayed so that the companies could make needed improvements. Having engaged with these companies, the participant’s organization was able to cooperate with their requests and ultimately to further the rankings’ goals in a shorter period of time than initially expected.
5. Leadership, credibility and technical excellence are all vital. A ranking system’s success requires strong, forward-thinking vision on how “excellence” should be defined in relation to the subject of the ranking. Credibility of the ranking is achieved not only through solid research in constructing the methodology to determine its real-world relevance, but also through continuous consultation and engagement with all stakeholders who are likely to use the rankings as well as with companies that will be subject to ranking. Technical excellence in data collection, analysis, and presentation is also vital to the project’s success.
The Experience of the Global Network Initiative
Arising out of digital rights issues in China, the Global Network Initiative (GNI) was established as a sort of “peace agreement” among ICT companies, investors, human rights groups and academics. The GNI Principles on free expression and privacy (published in 2008) are one of the key documents on which the Ranking Digital Rights criteria will be constructed, in addition to the UN Guiding Principles on Business and Human Rights (published in 2011). Although the GNI’s company membership remains somewhat limited (Google, Microsoft, Yahoo!, Websense, Evoca, and now Facebook), participants noted that its principles are already becoming a global standard: they are regularly invoked by human rights groups and governments and are even used by companies outside of the GNI itself.
While the purpose of the GNI is to engage with companies and to provide an independent assessment mechanism to verify whether member companies are actually living up to their commitment to the GNI principles and accompanying Implementation Guidelines, the organization only assesses those companies that choose to join. Thus the vast majority of companies in the ICT sector remain un-assessed and un-measured when it comes to their policies and practices related to free expression and privacy. Participants also noted that the GNI’s principles and implementation guidelines are broader than the proposed criteria for the project under discussion – due in no small part to the fact that the GNI documents were drafted in negotiation with the companies. One participant also noted that the GNI’s governance structure, with companies sitting on the multi-stakeholder organization’s Board of Directors, means that it is at times less transparent than perhaps it should be.
It is thus the goal of this project to fill the gap between organizations like the GNI on the one hand—which, being voluntary on the part of companies, are often less stringent in their demands—and proposed regulatory legislation such as the Global Online Freedom Act (GOFA) on the other. In addition, whereas the GNI focuses primarily on corporate policies and practices related to government requests, Ranking Digital Rights intends to cover other company-initiated policies and actions that affect users’ and customers’ human rights but are not driven by government demands. Thus the project aims both to build upon the progress made by the GNI and to fill the gap between cooperative multi-stakeholder arrangements on the one hand and inflexible, reactive government regulation on the other.
Challenges of Telecommunications Companies
While the GNI has focused primarily on Internet companies (no telecommunications companies have joined to date), Ranking Digital Rights intends in its Phase 1 criteria to focus equally on the responsibilities of Internet and telecommunications companies in the sphere of digital rights. Especially with the convergence of telephony and data communications, the growing value of “big data,” and the ever-expanding number of telecommunications customers both public and private, telecommunications companies hold considerable sway over users’ rights to free speech and privacy in particular.
Ranking global telecommunications companies, however, presents a number of challenges:
- They provide an expanding variety of different kinds of services.
- They often have joint ventures and subsidiaries across a range of very different jurisdictions.
- The structure of licensing agreements – and thus company policies related to government demands as well as terms of service – can vary sharply across jurisdictions.
- Government relationships are very complicated, given that domestic telcos in many countries are currently or formerly state-owned.
- Unlike Internet companies, telcos have substantial personnel, physical plant and equipment in the countries where they operate – making it difficult to pull out quickly when suddenly caught in the middle of government human rights violations or conflict situations.
Companies vary with respect to the way they handle government requests, whether the requests are targeted or maximal. In cases where governments request specific data about particular customers, companies may be more easily held to account for the ways in which they handle these requests. But for companies which have maximal requests—or real-time government interception capabilities—built into their licensing agreements with governments, this evaluation is much harder to make. In light of this challenge some participants suggested that rather than examining the policies of these latter companies, it may be more revealing to consider the companies’ risk assessment practices when entering new markets. Do telecommunications companies carry out the necessary due diligence before signing agreements with governments with poor human rights records? Do they adhere to the UN Guiding Principles on Business and Human Rights by attempting to mitigate human rights risks or else withdrawing from the market? These are some of the questions that must be asked.
Some participants suggested a separate set of rankings for telecommunications companies, or perhaps delaying consideration of these companies until after the first year or so of rankings. Ultimately, the discussion over telecommunications companies tied in closely to the broader question of the extent to which all ICT companies can be held to the same standards. More research on these companies will be necessary to evaluate that question in particular.
Expanding the discussion from ICT companies to the jurisdictions in which they operate, workshop participants commented on the challenges of applying the same criteria across a wide diversity of legal systems. The discussion centered broadly around the question of what allowances, if any, should be made for companies operating in authoritarian or semi-authoritarian jurisdictions, where full respect for digital rights may be illegal.
On a fundamental level, countries differ on whether and to what extent their constitutions guarantee rights to privacy and free speech – and also differ on the extent to which these guarantees are upheld or can be defended in court. Countries differ also in the specificity and scope of their data protection and digital surveillance-related laws—if they have any at all. Companies often argue that they have no choice but to comply with government removal and data requests in the many jurisdictions with authoritarian governments and weak rule of law. Only in more liberal jurisdictions do they have the option to challenge these requests on a firm legal footing. Should companies, therefore, be held to a universal set of standards, when their operating environments differ so radically? As one participant noted, the project should seek to avoid an outcome in which companies based in more liberal jurisdictions inevitably score higher than those in illiberal jurisdictions—in which case the rankings would reveal more about jurisdictions than about the companies themselves.
That said, another participant argued that the rankings must not be too relativistic. One of the rankings’ goals is to establish a global standard for ICT companies based on the UN Guiding Principles, the GNI Principles, and other principles grounded in international – and universal – human rights law in the context of information and communications technology. If the rankings make too many allowances for the various legal regimes under which companies operate, the point of universal human rights would be lost. How to establish a global standard while also accounting for companies’ particular legal challenges thus remains one of the most important questions to be resolved with respect to the project’s methodology.
While no definitive conclusion was drawn, consensus in the room leaned toward relying upon a universal standard of human rights.
There was a debate among technologists in the room about how specific the criteria ought to be when it comes to specifying how users’ privacy and security should be protected at the technical level. One technologist argued that, to avoid failing to keep pace with developments in security technologies (as well as vulnerabilities), the criteria should not go into too much technical detail. With encryption, for example, rather than specifying particular standards, they argued that we should simply ask whether a company has deployed the latest version of the relevant encryption standard for the function at hand.
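This version-relative approach could be sketched roughly as follows. The protocol ordering, the `LATEST_KNOWN` value, and both helper names are illustrative assumptions for the purpose of this sketch, not project criteria:

```python
import socket
import ssl

# "Latest" is deliberately a moving target here: update this constant as
# standards evolve, rather than hard-coding a protocol into the criteria.
LATEST_KNOWN = "TLSv1.2"  # a plausible "latest" value circa 2013

def is_up_to_date(negotiated, latest=LATEST_KNOWN):
    """True if the negotiated protocol is at least as new as the latest known one."""
    order = ["SSLv3", "TLSv1", "TLSv1.1", "TLSv1.2", "TLSv1.3"]
    return order.index(negotiated) >= order.index(latest)

def server_tls_version(host, port=443):
    """Connect to a server and report the TLS version it actually negotiates."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()
```

A researcher could then evaluate `is_up_to_date(server_tls_version("example.com"))` without the criteria themselves ever naming a specific protocol version.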
Other human rights considerations
A further dimension to companies’ international operations involves their observance of international technology sanctions and export controls imposed primarily by North American and European governments. In general, one participant pointed out, many U.S.- and European-based companies tend to withhold goods and services from countries under sanctions or export controls for fear of violating these restrictions—even if their goods and services are not actually restricted. In fact, the goods and services that companies provide oftentimes empower local populations (including any opposition movements), and therefore their provision should be encouraged, or at least not discouraged. Companies might be evaluated for how they manage the complexities of sanctions and export regimes as well as how they deal with conflict situations, e.g. the current war in Syria. Another participant emphasized the importance of developing criteria that will cover the practices and policies of companies operating in, providing services in, or selling products to places where armed conflict and/or genocide are taking place or have a strong possibility of occurring.
Rankings vs. Index
This conversation on the rankings’ international scope informed a broader discussion on whether the project should aim to produce rankings or perhaps something more along the lines of an index. In a consideration of data formatting and compatibility with other projects, workshop participants commented on the possibility of creating a matrix of companies and jurisdictions as well as building in tools to adjust the weights of various sub-scores according to consumers’ interests. Several participants were enthusiastic about the prospects for an index capable of comparing companies across a wide range of jurisdictions and varying the weights assigned to the assessment criteria.
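The adjustable-weight idea floated by participants could be sketched as below. The criteria names, scores, and weights are purely illustrative assumptions, not actual project categories or data:

```python
def weighted_score(sub_scores, weights):
    """Combine per-criterion sub-scores (0-100) using normalized weights."""
    total_weight = sum(weights[c] for c in sub_scores)
    return sum(sub_scores[c] * weights[c] for c in sub_scores) / total_weight

# Hypothetical sub-scores for one company (illustrative values only).
company = {"free_expression": 72, "privacy": 55, "transparency": 80}

# A consumer who cares most about privacy can raise that criterion's weight.
default_weights = {"free_expression": 1, "privacy": 1, "transparency": 1}
privacy_weights = {"free_expression": 1, "privacy": 3, "transparency": 1}

print(round(weighted_score(company, default_weights), 1))  # 69.0, equal weighting
print(round(weighted_score(company, privacy_weights), 1))  # 63.4, privacy-weighted
```

Because the weights are applied at display time rather than baked into the published scores, the same underlying data can serve audiences with different priorities.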
Transparency, in particular, was a recurring theme throughout the workshop. Although this particular discussion centered around the technical tools that can be used to facilitate transparency in data aggregation, there was a broad consensus that the project should be as transparent as possible with respect to the sources of its data and how it uses the data to score companies.
Inter-operability and compatibility with related projects
A number of academic institutions, think tanks, and advocacy organizations are in various stages of developing projects that measure a range of different factors affecting free expression and privacy on the global Internet. Most of these focus on countries or geographically based networks as their unit of measurement. These include: technical monitoring of censorship and bandwidth throttling/shaping on specific networks around the world; the tracking and comparison of laws affecting free expression and privacy; the tracking of Internet accessibility and openness in countries around the world, etc. It was pointed out that many of these projects are likely to produce data that will help to inform the Ranking Digital Rights project, and that compatibility among projects in terms of data formats will help to maximize inter-operability and synergies among the different projects. Participants suggested establishing regular channels of communication and coordination among related projects—perhaps in the form of a mailing list—in order to reduce overlap and maximize potential for collaboration.
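One minimal form such data compatibility could take is a shared, self-describing record format. The sketch below is hypothetical; every field name and value is an illustrative assumption, not an agreed standard among the projects discussed:

```python
import json

# One record scoring one subject on one criterion. A country-focused project
# could reuse the same shape with unit="country"; only the fields differ in
# value, not in structure, which is what makes the data mergeable.
record = {
    "project": "ranking-digital-rights",
    "unit": "company",
    "subject": "ExampleCo",
    "criterion": "privacy.data_retention_disclosure",
    "score": 2,
    "scale": [0, 3],
    "source": "https://example.com/privacy-policy",
    "date": "2013-04-04",
}

# Serializing to a common text format (JSON here) lets projects with different
# units of evaluation publish data that others can parse and combine.
serialized = json.dumps(record, sort_keys=True)
```

Agreeing even on this small a schema would let a company-level score be joined against country-level measurements from related projects.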
One of the questions raised was how to translate rankings into action on the part of companies and governments. The group concluded that rankings can empower constituencies to create change within companies, and that previous projects such as the EFF’s “Who Has Your Back?” have shown that an annual report structure is effective in furthering policy changes at least at certain U.S. companies. Ultimately, participants agreed that the rankings should not be a “one-off,” but an ongoing effort capable of tracking policies and practices over many years.
The suggestions, expertise, and ideas offered during the course of the day have contributed directly to the formulation of the Draft Phase 1 Criteria for Internet and Telecommunications Companies, which will be tested by case study research teams throughout the summer and fall of 2013.