Photo by Maurizio Pesce via Wikimedia Commons, CC BY 2.0

As everyday consumer appliances and devices like televisions are increasingly connected to the internet, concerns about privacy and security are mounting. Adding to that growing anxiety, on February 7 Consumer Reports reported that certain smart TV models are vulnerable to hackers. The assessment, conducted in collaboration with Ranking Digital Rights (RDR) and Disconnect, a company that makes digital tools for preventing privacy invasions, revealed that security vulnerabilities in two of the five TV brands tested, Samsung and TCL, could allow a hacker to remotely take control of the TV.

Researchers also found that all “smart,” or internet-connected, TVs examined collect large amounts of information, which they send back to the TV manufacturers, software providers, and various third parties that deliver content, process payments and warranty claims, and provide marketing services. Yet users cannot always control or minimize such data collection without losing the features that make their TVs “smart” in the first place and that enable streaming or searching for content on apps such as Netflix and YouTube.

These unsettling findings are the first published results of an ongoing collaborative research and testing project that uses the Digital Standard to evaluate internet-connected products that make up what is often called the “internet of things.” The Standard is a set of essential privacy and security criteria for assessing smart devices, services, and apps, developed in partnership with leading privacy, security, and human rights organizations, including Ranking Digital Rights. The goal is to encourage technology companies to prioritize consumers’ security and privacy needs, and to help consumers make informed choices.

Many of the privacy and security criteria included in the Digital Standard are either directly borrowed or adapted from RDR’s Corporate Accountability Index methodology. While RDR’s 35 indicators were developed to evaluate internet, mobile, and telecommunications companies, with some adaptation the methodology is proving to be equally suitable for assessing networked devices and services such as smart TVs. As part of the collaborative research and testing effort led by Consumer Reports, other types of networked devices and applications are also being evaluated against the Digital Standard. Thus, while the RDR Corporate Accountability Index focuses on 22 internet, mobile, and telecommunications companies, the Digital Standard project demonstrates how the core principles underlying RDR’s methodology can be used to evaluate many more companies and product types across the information and communication technology (ICT) sector.

The RDR indicators incorporated into the Digital Standard criteria focus on corporate disclosure of policies and practices around data collection and control, data use and sharing, and privacy and security oversight, among other issues. Collectively, these indicators contributed to Consumer Reports’ findings about the disturbing amount of data that TVs collect when connected to the internet. These data can include log, device, and location information, as well as information about the content users watch, all of which can be combined and shared for targeted advertising on TVs and other platforms, with significant implications for privacy and security.

More importantly, the findings reported this month by Consumer Reports highlight once again the importance of assessment tools such as RDR’s Index and the Digital Standard. Both provide companies with a roadmap to follow for establishing basic privacy and security standards. They also provide consumers with clear guidance for what they should be looking for in choosing internet-connected products. Furthermore, such evidence-based findings about privacy weaknesses and security vulnerabilities can be leveraged by advocacy organizations, shareholders, and users to demand more accountability from companies. They can also inform the work of policymakers as products from a growing number of industries get connected to the internet.

Corporate Accountability News Highlights is a regular series by Ranking Digital Rights highlighting key news related to tech companies, freedom of expression, and privacy issues around the world.

Journalists urged to quit iCloud China 

Apple store in Shanghai, China. Photo by myuibe [CC BY 2.0], via Wikimedia Commons

Reporters Without Borders is urging journalists and bloggers to quit Apple iCloud China as control over the service is set to be transferred to a local host with close ties to the Chinese government. The press freedom watchdog voiced concerns that the transition will pose a threat to the security of journalists and their personal data, urging them to stop using iCloud China or to change their geographic region.

Apple is making the migration to comply with new regulations that require cloud services to be operated by Chinese companies and user data to be stored locally. Starting February 28, Guizhou-Cloud Big Data (GCBD), a company owned by the local Guizhou provincial government, will operate iCloud in mainland China. Although Apple said that it has strong data privacy and security protections in place and that “no backdoors will be created into any of [their] systems,” GCBD will still have access to all user data, according to a newly added clause in the iCloud China user agreement. This has raised concerns that the Chinese government will be able to easily spy on users.

Companies should conduct regular, comprehensive human rights risk assessments that evaluate how laws affect freedom of expression and privacy in the jurisdictions in which they operate, as well as assessments of freedom of expression and privacy risks when entering new markets or launching new products. Companies should also seek ways to mitigate the risks those assessments identify. The 2017 Corporate Accountability Index found that Apple did not disclose whether it conducted these types of assessments. However, Apple recently published a new “Privacy Governance” policy stating that it conducts privacy-related impact assessments, although it does not disclose whether its due diligence process includes evaluating freedom of expression risks.


Corporate Accountability News Highlights is a regular series by Ranking Digital Rights highlighting key news related to tech companies, freedom of expression, and privacy issues around the world.

China shuts down Weibo services for a week

Images remixed by Oiwan Lam (CC BY 2.0)

The Chinese government has ordered the micro-blogging platform Sina Weibo to shut down several of its services for a week over objectionable content. On January 27, the Cyberspace Administration of China, the country’s internet regulator, complained to a Weibo executive about “vulgar and pornographic content” and ordered the platform to shut down several portals, including its celebrity portal and its “hot searches” list. The regulator also denounced content that discriminates against minorities and contradicts China’s “social values.”

Weibo is one of the most popular social media platforms in China, with more than 300 million monthly active users. The Chinese government implements strict internet censorship policies: popular non-Chinese services and platforms like Facebook, Twitter, and YouTube are banned, while Chinese services such as Weibo, the instant messaging app WeChat, and the Baidu search engine operate under tight regulations that require them to monitor and take down objectionable content.

Internet, mobile, and telecommunications companies should be transparent about how they handle government requests for content restrictions and should publish transparency reports on such requests that include data on the number of requests received, the number complied with, and the types of subject matter associated with these requests. Most companies evaluated in the 2017 Corporate Accountability Index lacked transparency about how they handle government requests to restrict content or accounts, and did not disclose sufficient data about the number of requests they received or complied with, or which authorities made these requests.


Special Rapporteur David Kaye. Source: un.org

As heated controversies and debates continue to rage about how governments, companies, and citizens should respond to the problem of disinformation and hate speech on social media, a forthcoming report on Content Regulation in the Digital Age by David Kaye, U.N. Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, could not be more timely and important. As Kaye described it in his call for submissions to help inform the report, his aim is to examine the “impact of platform content regulation on freedom of expression and examine appropriate private company standards and processes and the role that States should play in promoting and protecting freedom of opinion and expression online.”

RDR submitted a paper (downloadable as a PDF here) tying key findings and recommendations from the 2017 Corporate Accountability Index to many of the questions that Kaye aims to examine in his June 2018 report. The 2017 Index findings highlighted how, despite the important role that internet platforms such as social media and search services play in mediating public discourse, and despite recent progress by some companies in disclosing policies and actions related to government requests, the process of policing content on internet platforms remains unacceptably opaque. As a result, we found that:

  • Users of internet platforms cannot adequately understand how their online information environment is being governed and shaped, by whom, under what authorities, and for what reasons. When transparency around the policing of online speech is inadequate, people do not know whom to hold accountable when infringements of their expression rights occur.
  • This situation is exacerbated by the fact that some of the world’s most powerful internet platforms do not conduct systematic impact assessments of how their terms of service policies and enforcement mechanisms affect users’ rights.
  • Furthermore, grievance and remedy mechanisms for users to report and obtain redress when their expression rights are infringed are woefully inadequate.

In light of these facts, we proposed the following recommendations for companies and governments:

1. Increase transparency of how laws governing online content are enforced via internet intermediaries and how decisions to restrict content are made and carried out. Companies should disclose their policies for decision making regarding content restrictions, whether carried out at the request of governments or private actors, or at the company’s own initiative to enforce its terms of service. They should also disclose data on the volume and nature of content being restricted or removed for the full range of reasons that result in restriction. Governments must encourage, if not require, such transparency and match it with transparency of their own regarding demands – direct as well as indirect – that they place upon companies to restrict content.

2. Broaden impact assessment and human rights due diligence in relation to the regulation and private policing of content. Companies must conduct human rights impact assessments that examine policies and mechanisms for identifying and restricting content, including terms of service enforcement and private flagging mechanisms. They must disclose how such assessments are used to identify and mitigate any negative impact on freedom of expression that may be caused by these policies and mechanisms. Governments should also assess existing and proposed laws regulating content on internet platforms to ensure that they do not result in increased infringement of users’ freedom of expression rights.

3. Establish and support effective grievance and remedy mechanisms to address infringements of internet users’ freedom of expression rights. When content is erroneously removed, or a law or policy is misinterpreted in a manner that results in the censorship of speech that should be protected under international human rights law, effective grievance and remedy mechanisms are essential to mitigating harm. Adequate mechanisms are presently lacking on the world’s largest and most powerful internet platforms. Governments seeking increased policing of extremist and violent content by platforms should not only support but also participate in the development of effective grievance and remedy mechanisms.

Click here to download the full submission.

Corporate Accountability News Highlights is a regular series by Ranking Digital Rights highlighting key news related to tech companies, freedom of expression, and privacy issues around the world.

U.S. Supreme Court to hear Microsoft email case  

The U.S. Supreme Court Building (Photo by Joe Ravi, licensed CC-BY-SA 3.0)

On February 27, the U.S. Supreme Court is set to hear a landmark case in which the U.S. Department of Justice is seeking to force Microsoft to hand over the content of emails stored in a data center in Ireland. The case dates back to 2013, when a federal judge in New York issued a warrant requesting that Microsoft hand over Outlook email information belonging to a user who was the subject of a drug-trafficking investigation. While the company agreed to hand over metadata stored in the U.S., it refused to hand over the content of the emails, which are stored in Ireland. If the Supreme Court rules in favor of the U.S. government, it would set a precedent allowing governments to obtain data stored in other countries.

In July 2016, the U.S. Second Circuit Court of Appeals ruled in favor of Microsoft and quashed the U.S. government’s search warrant. The U.S. government had sought the emails’ content unilaterally, under the Stored Communications Act, instead of submitting a Mutual Legal Assistance Treaty (MLAT) request to Ireland. MLATs are bilateral, multilateral, or regional agreements that allow governments to exchange information related to an investigation.

The U.S. government argues that using an MLAT process is “costly, cumbersome and time-consuming,” and is not needed since “the privacy intrusion occurs only when Microsoft turns over the content to the Government, which occurs in the United States.”

Microsoft argues that the emails stored in its data center in Dublin are protected by Irish and EU privacy laws, and that the U.S. government should seek the information through the United States-Ireland MLAT process. More than 250 signatories, including leading tech companies, members of Congress, European lawmakers, and advocacy groups, signed 23 amicus briefs in support of Microsoft.

Companies should disclose information about their processes for responding to government requests for user data, including their processes for responding to non-judicial government requests and court orders, and the legal basis under which they comply with requests. In addition, companies should publicly commit to pushing back on inappropriate or overbroad government requests. Companies should also disclose and regularly publish data about these requests, including the number of requests received by country, the number of accounts and pieces of content affected, and the legal authorities making the requests.
