Corporate Accountability News Highlights is a regular series by Ranking Digital Rights highlighting key news related to tech companies, freedom of expression, and privacy issues around the world.

Journalists urged to quit iCloud China 

Apple store in Shanghai, China. Photo by myuibe [CC BY 2.0], via Wikimedia Commons

Reporters Without Borders is urging journalists and bloggers to quit Apple iCloud China as control over the service is set to be transferred to a local host with close ties to the Chinese government. The press freedom watchdog voiced concerns that the transition will pose a threat to the security of journalists and their personal data, urging them to stop using iCloud China or to change their geographic region.

Apple is making the migration to comply with new regulations which require cloud services to be operated by Chinese companies and user data to be stored locally. Starting from February 28, Guizhou-Cloud Big Data (GCBD), a company owned by the local Guizhou provincial government, will operate iCloud in mainland China. Although Apple said that it has strong data privacy and security protections in place and that “no backdoors will be created into any of [their] systems,” GCBD will still have access to all user data, according to a newly added clause in the iCloud China user agreement. This has raised concerns that the Chinese government will be able to easily spy on users.

Companies should conduct regular, comprehensive human rights risk assessments that evaluate how laws affect freedom of expression and privacy in the jurisdictions in which they operate, as well as assessments of freedom of expression and privacy risks when entering new markets or launching new products, and they should seek ways to mitigate the risks those assessments identify. The 2017 Corporate Accountability Index found that Apple did not disclose whether it conducted these types of assessments. However, Apple recently published a new “Privacy Governance” policy stating that it conducts privacy-related impact assessments, although it does not disclose whether its due diligence process includes evaluating freedom of expression risks.


Corporate Accountability News Highlights is a regular series by Ranking Digital Rights highlighting key news related to tech companies, freedom of expression, and privacy issues around the world.

China shuts down Weibo services for a week

Images remixed by Oiwan Lam (CC BY 2.0)

The Chinese government has ordered the micro-blogging platform Sina Weibo to shut down several of its services for a week over objectionable content. On January 27, the Cyberspace Administration of China, the country’s internet regulator, complained to a Weibo executive about “vulgar and pornographic content” and ordered the platform to shut down several portals, including its celebrity portal and its “hot searches” rankings. The regulator also denounced content that discriminates against minorities and contradicts China’s “social values.”

Weibo is one of the most popular social media platforms in China, with more than 300 million monthly active users. The Chinese government enforces strict internet censorship policies. Popular non-Chinese services and platforms like Facebook, Twitter, and YouTube are banned, while Chinese services such as Weibo, the instant messaging app WeChat, and the Baidu search engine operate under tight regulations that require them to monitor and take down objectionable content.

Internet, mobile, and telecommunications companies should be transparent about how they handle government requests for content restrictions and should publish transparency reports on such requests that include the number of requests received, the number complied with, and the types of subject matter associated with these requests. Most companies evaluated in the 2017 Corporate Accountability Index lacked transparency about how they handle government requests to restrict content or accounts, and did not disclose sufficient data about the number of requests they received or complied with, or which authorities made these requests.


Special Rapporteur David Kaye. Source: un.org

As heated controversies and debates continue to rage about how governments, companies, and citizens should respond to the problem of disinformation and hate speech on social media, a forthcoming report on Content Regulation in the Digital Age by David Kaye, U.N. Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, could not be more timely and important. As Kaye described it in his call for submissions to help inform the report, his aim is to examine the “impact of platform content regulation on freedom of expression and examine appropriate private company standards and processes and the role that States should play in promoting and protecting freedom of opinion and expression online.”

RDR submitted a paper (downloadable as PDF here) tying key findings and recommendations from the 2017 Corporate Accountability Index to many of the questions that Kaye aims to examine in his June 2018 report. The 2017 Index findings highlighted how, despite the important role that internet platforms such as social media and search services play in mediating public discourse, and despite recent progress by some companies in disclosing policies and actions related to government requests, the process of policing content on internet platforms remains unacceptably opaque. As a result, we found that:

  • Users of internet platforms cannot adequately understand how their online information environment is being governed and shaped, by whom, under what authority, and for what reason. When transparency around the policing of online speech is inadequate, people do not know whom to hold accountable when infringements of their expression rights occur.
  • This situation is exacerbated by the fact that some of the world’s most powerful internet platforms do not conduct systematic impact assessments of how their terms of service policies and enforcement mechanisms affect users’ rights.
  • Furthermore, grievance and remedy mechanisms for users to report and obtain redress when their expression rights are infringed are woefully inadequate.

In light of these facts, we proposed the following recommendations for companies and governments:

1. Increase transparency of how laws governing online content are enforced via internet intermediaries and how decisions to restrict content are being made and carried out. Companies should disclose their policies for decision making regarding content restrictions, whether made at the request of governments or private actors, or carried out at the company’s own initiative to enforce its terms of service. They should also disclose data on the volume and nature of content being restricted or removed for the full range of reasons that result in restriction. Governments must encourage if not require such transparency and match it with transparency of their own regarding demands – direct as well as indirect – that they place upon companies to restrict content.

2. Broaden impact assessment and human rights due diligence in relation to the regulation and private policing of content. Companies must conduct human rights impact assessments that examine policies and mechanisms for identifying and restricting content, including terms of service enforcement and private flagging mechanisms. They must disclose how such assessments are used to identify and mitigate any negative impact on freedom of expression that may be caused by these policies and mechanisms. Governments should also assess existing and proposed laws regulating content on internet platforms to ensure that they do not result in increased infringement of users’ freedom of expression rights.

3. Establish and support effective grievance and remedy mechanisms to address infringements of internet users’ freedom of expression rights. When content is erroneously removed or a law or policy is misinterpreted in a manner that results in the censorship of speech that should be protected under international human rights law, effective grievance and remedy mechanisms are essential to mitigating harm. Adequate mechanisms are presently lacking on the world’s largest and most powerful internet platforms. Governments seeking increased policing of extremist and violent content by platforms should not only support but participate in the development of effective grievance and remedy mechanisms.

Click here to download the full submission.

Corporate Accountability News Highlights is a regular series by Ranking Digital Rights highlighting key news related to tech companies, freedom of expression, and privacy issues around the world.

U.S. Supreme Court to hear Microsoft email case  

The U.S. Supreme Court Building (Photo by Joe Ravi, licensed CC-BY-SA 3.0)

On February 27, the U.S. Supreme Court is set to hear a landmark case in which the U.S. Department of Justice is seeking to force Microsoft to hand over the content of emails stored in a data center in Ireland. The case dates back to 2013, when a New York state judge issued a warrant requesting that Microsoft hand over Outlook email information belonging to a user who was the subject of a drug-trafficking investigation. While the company agreed to hand over metadata stored in the U.S., it refused to hand over the content of the emails, which are stored in Ireland. If the Supreme Court rules in favor of the U.S. government, it would set a new precedent allowing governments to obtain data stored in other countries.

In July 2016, the U.S. Second Circuit Court of Appeals ruled in favor of Microsoft and quashed the U.S. government search warrant. The U.S. government sought the emails’ content unilaterally, under the Stored Communications Act, instead of submitting a Mutual Legal Assistance Treaty (MLAT) request to Ireland. MLATs are bilateral, multilateral, or regional agreements that allow governments to exchange information related to an investigation.

The U.S. government argues that using an MLAT process is “costly, cumbersome and time-consuming,” and is not needed since “the privacy intrusion occurs only when Microsoft turns over the content to the Government, which occurs in the United States.”

Microsoft argues that the emails stored in its data center in Dublin are protected by Irish and EU privacy laws, and that the U.S. government should seek the sought-after information through the United States-Ireland MLAT process. More than 250 signatories, including leading tech companies, members of Congress, European lawmakers, and advocacy groups, signed 23 amicus briefs in support of Microsoft.

Companies should disclose information about their process for responding to government requests for user data, including their processes for responding to non-judicial government requests and court orders, and the legal basis under which they comply with requests. In addition, companies should publicly commit to push back on inappropriate or overbroad government requests. Companies should also disclose and regularly publish data about these requests, including the number of requests received by country, the number of accounts and pieces of content affected, and the legal authorities making the requests.


Corporate Accountability News Highlights is a regular series by Ranking Digital Rights highlighting key news related to tech companies, freedom of expression, and privacy issues around the world.

India’s Supreme Court reviews biometric database program

A woman receives an iris scan to enroll in the Aadhaar database (Photo by Biswarup Ganguly, licensed CC BY 3.0)

This week, India’s Supreme Court began its final hearings to consider the legality of the government’s controversial biometric database program, reviewing nearly 30 petitions filed against the program over the past several years. Under the program, known as Aadhaar, individuals must enroll in a database—which requires submitting scans of their fingerprints and irises—in order to obtain a variety of government services, including paying taxes or receiving government subsidies. The Department of Telecommunications also requires telecommunications companies to collect Aadhaar numbers for new mobile customers, and has given current mobile phone users until March 31, 2018 to re-verify their SIM cards with their Aadhaar number to continue to receive service.

Privacy advocates have raised concerns over Aadhaar, especially after recent news highlighting privacy and security issues relating to the program. In December 2017, Bharti Airtel’s license to carry out Aadhaar-based SIM verification was suspended after it was discovered that the company had been using this process to open bank accounts for mobile customers without their informed consent. The Tribune, an Indian newspaper, recently revealed that reporters were able to gain access to the database after paying about US$8 to an anonymous individual who was selling access online. The Tribune reported that after entering an individual’s Aadhaar number they were able to view his or her personal information, including name, address, postal code, photo, phone number, and email.

As noted in the 2017 Corporate Accountability Index recommendations, governments should work with the private sector and civil society to ensure that legal and regulatory frameworks make it possible for companies to respect digital rights. This includes respecting the right to anonymous online activity. Governments should refrain from requiring companies to document users’ identities when it is not essential to the provision of service. As U.N. Special Rapporteur on freedom of expression David Kaye has stated, anonymity is essential for individuals to exercise their right to freedom of expression, and deserves strong protections.