RDR in Slate’s Future Tense blog

The following story, written by Priya Kumar, originally appeared in Slate’s Future Tense blog.

When Was the Last Time You Read a Privacy Policy?

Tech companies know that everyone skips the fine print. It’s time for them to change their approach.

At one point last fall, I had 16 tabs open in my Web browser, each displaying the privacy policy of a global Internet or telecommunications company. While conducting research for the Ranking Digital Rights Corporate Accountability Index, I read and re-read each policy, trying to figure out what companies said they did with the vast stores of user information that lived on their servers.

The end result? While the policies were lengthy, they frequently glossed over how much information the companies collected and how they used it. Sure, Google and Facebook each mentioned in their policies that they collected user information through third-party trackers. But that in no way reflects the scope of the tracking: According to UC–Berkeley researcher Ibrahim Altaweel and his team, Google’s tracking mechanisms cover 85 percent of the most popular websites, and Facebook’s reach 55 percent, giving both companies extraordinary visibility into the Web browsing habits of millions of people. So even those who actually read company policies don’t get a full picture of what’s going on, making it difficult for them to protect their personal information.

Altaweel’s study was one of many discussed at the Federal Trade Commission’s PrivacyCon on Jan. 14 and at the Future of Privacy Forum’s Privacy Papers for Policymakers event the day before. Both events highlighted the disconnect between people’s expectations and corporate practices related to the collection and use of personal information, echoing the findings of the Corporate Accountability Index.

Researchers challenged the “notice-and-choice” approach that drives privacy protection in the United States. They emphasized that yes, corporate practices related to user information need additional transparency—but simply understanding what companies do isn’t enough if people lack meaningful alternatives or mechanisms for recourse.

Historically, the United States has approached privacy as a commercial matter, while countries in Europe and elsewhere typically view privacy as a fundamental right of their citizens. Consequently, other countries tend to have data protection laws that companies must follow, while American companies protect people’s privacy by notifying them of the company’s privacy practices (the privacy policy) and giving them a choice about whether to hand over their data (the “I agree” button everyone clicks without thinking).

The problems of this notice-and-choice approach are evident to anyone who uses the Internet: No one reads privacy policies, and no one really feels like he or she is exercising a choice when clicking “I agree.” Furthermore, privacy policies are written in a way that satisfies regulators, not regular people. They lack clear explanations of company practices, and companies can change the policies at any time.

As usage of mobile apps, wearable technology, and smart devices inside and outside the home continues to rise, the notice-and-choice model will become even more obsolete, given that user information flows almost constantly between such devices and company servers. Consider that UC–Berkeley researcher Serge Egelman and his team found that applications on an Android phone issue nearly 100,000 requests per day to access a person’s sensitive data. No one wants to receive a notification for each of those requests. Yet 80 percent of the participants in Egelman’s study would have denied at least one of those permission requests, had they been given the opportunity to do so.

The argument that individuals rationally weigh the pros and cons of giving their data to companies doesn’t reflect reality, given the vast amount of data users generate and the number of companies that access that data. Joseph Turow, a professor at the University of Pennsylvania, calls this the “tradeoff fallacy.” He and his team found that more than half of Americans want control over their personal information but feel powerless to exert that control.

Instead, people use other shortcuts when deciding whether to interact with a given product or service online. Carnegie Mellon University researcher Ashwini Rao and her team found that people use websites based on their expectations of what companies do with user information, rather than on what the privacy policies say companies actually do. For example, users expect a banking website to collect financial information, not health information. Yet Rao and her team found that Bank of America’s policy says it collects health information from its registered users and shares it, meaning people could be using the website under mistaken assumptions about the privacy of certain pieces of information.

In addition, professors Heather Shoenberger of the University of Oregon and Jasmine McNealy of the University of Florida found that people are more likely to accept a website’s terms if the site has a privacy policy—regardless of what the policy actually says—or if it has an appealing design. These behaviors suggest that people may be using websites under mistaken assumptions about what companies are doing with their information. Additional transparency could ameliorate the situation, but what’s truly needed is for companies to act in ways that respect users’ privacy.

Consider this: Columbia University professor Roxana Geambasu and her team found evidence that an advertising tool used by Gmail until November 2014 targeted ads based on sensitive personal information—something its policies, both those in place in 2014 and those in place now, say it does not do. (Google declined to comment on Geambasu’s research, but a representative said, “We have an extensive set of policies that guide what ads can be shown, and in Gmail we manually review all ads that are shown.”) And while Geambasu and other researchers emphasize that results like these do not imply that the targeting was intentional, the work does highlight the need for better understanding of how systems like Google’s algorithms determine who sees what information.

Users currently lack adequate frameworks to seek redress if they believe a company’s actions violate their privacy. The Corporate Accountability Index found a dearth of clear processes for remedy among companies. India’s Bharti Airtel scored highest in the index among telecommunications companies on remedy, and South Korea’s Kakao scored highest among Internet companies. Both companies are headquartered in countries that maintain certain legal requirements for remedy, which boosted their performance.

Fordham University law professor Joel Reidenberg suggests that a new international treaty will be needed to protect privacy in a global, networked era. Otherwise we risk allowing governments to gain greater insight into the lives of their citizens, while the ways that governments use such information become more opaque. Indeed, while a significant portion of user information lives on company servers, privacy policies typically state that companies may turn information over to the government, though, of course, details about the circumstances in which they would do so are scant.

What else can be done? Turow encouraged public interest organizations to examine company policies and highlight how well, or how poorly, companies perform. Ranking Digital Rights’ Corporate Accountability Index does just that by evaluating a group of technology companies against standards related to freedom of expression and privacy. The index found that, overall, company disclosure about the collection, use, sharing, and retention of user information is poor. But companies can take steps in the right direction by, for example, providing a list of the third parties with which they share user information, as Yahoo does, or specifying how long they retain user information after people delete their accounts, as Twitter does. Change won’t happen overnight, but if the robust conversations at events such as PrivacyCon are any sign, many people are working to achieve it.
