Corporate Accountability News Highlights is a regular series by Ranking Digital Rights that highlights key news related to tech companies, freedom of expression, and privacy issues around the world.

Turkish government blocks Wikipedia

Image via Wikimedia Foundation (licensed CC BY-SA 3.0)

The Turkish government blocked Wikipedia this week, citing a law that allows authorities to block websites they deem obscene or a threat to national security. “Instead of coordinating against terrorism, it [Wikipedia] has become part of an information source which is running a smear campaign against Turkey in the international arena,” the government said. The Wikimedia Foundation issued a statement disputing the government’s claims and urging authorities to lift the block. “We strongly oppose censorship or threats that lead to self-censorship,” the foundation stated.

Blocking Wikipedia is the latest move in the Turkish government’s crackdown on online freedom of expression. Turkey was rated “Not Free” in Freedom House’s annual Freedom on the Net report, which noted that the government has, on numerous occasions, temporarily blocked social media services including Twitter, Facebook, WhatsApp, and YouTube. According to Twitter’s most recent transparency report, Turkey submitted the greatest number of government requests for content removal, both court orders (844 requests) and requests from government agencies, police, and other government authorities (2,232 requests).

UK Parliament: Social media companies should do more to police content

A new report by the UK Parliament’s Home Affairs Select Committee calls on social media companies like Twitter, Facebook, and Google to do more to monitor and remove illegal content. The report criticizes these companies for being “shamefully far” from adequately addressing “illegal and dangerous content,” arguing that the same technology they use to identify and take down copyright-infringing material could be applied to hate speech and extremist content. The report also recommends that the UK government consider fining companies that fail to remove illegal content quickly enough, citing a law recently proposed in Germany.

Some privacy advocates warn that such efforts to curb extremist content could lead to increased government censorship and that automating the process could make it more likely that legal content is erroneously removed.

As highlighted in the 2017 Corporate Accountability Index, some companies, like Google, Microsoft, and Twitter, are starting to publish data about content they remove for violating their rules. For instance, Twitter published some information about these takedowns in a 2016 blog post, and the company’s most recent transparency report included data about content removed for terms of service violations following government requests.

Companies pledge compliance with EU data protection rules

Google, Microsoft, and Amazon have committed to ensuring that their cloud services will be compliant with new European Union rules on data privacy, which come into effect in May 2018. The General Data Protection Regulation (GDPR), which European lawmakers adopted in April 2016, specifies new, EU-wide privacy rules for handling the personal data of individuals in the EU.

The GDPR requires companies to adhere to the principle of data minimization, to adopt accountability measures, which may include appointing a Data Protection Officer, and to abide by new requirements for reporting data breaches. Findings of the 2017 Corporate Accountability Index showed that the companies evaluated disclosed little information about policies for responding to data breaches. Only three of the 22 companies we evaluated revealed some information about whether they notify authorities or users who might be affected by a data breach.


Uber, Apple, and user privacy

The New York Times reported that in 2015 Uber ran afoul of Apple’s privacy rules by adding a feature to its iPhone app that allowed it to identify devices even after users had deleted the app or erased all contents on the device. The practice, known as “fingerprinting,” tracks devices using their Unique Device Identifier (UDID), which Apple announced in 2013 it would no longer allow app developers to access. According to the article, Uber engineers “geofenced” Apple’s headquarters in Cupertino, California in an effort to hide that portion of the code from Apple employees. After the code was discovered in 2015, Apple CEO Tim Cook demanded that Uber stop fingerprinting devices or be banned from the App Store, according to The New York Times.
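Fingerprinting of this kind generally works by deriving a stable identifier from hardware-level attributes that persist after an app is deleted. The sketch below is purely illustrative; the attribute names and hash construction are assumptions for demonstration, not Uber’s actual method:

```python
import hashlib

def device_fingerprint(attributes: dict) -> str:
    """Hash a canonical string of hardware-level attributes.

    Because the inputs survive an app reinstall or data wipe, the
    resulting identifier does too -- which is exactly what Apple's
    move away from the UDID was meant to prevent.
    """
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical device attributes: the same device yields the same
# fingerprint before and after the app is deleted and reinstalled.
device = {"model": "iPhone 6", "serial": "SN12345678", "wifi_mac": "a4:5e:60:01:02:03"}
before = device_fingerprint(device)
after = device_fingerprint(device)
```

The point of the sketch is that no app-level data needs to survive the reinstall: as long as the hardware attributes are readable, the identifier can be recomputed.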

This issue puts a spotlight on the need for mobile ecosystem companies like Apple, Google, and Samsung to have clear and transparent user-information collection and retention policies for third-party apps hosted on their app stores. Findings of the 2017 Corporate Accountability Index showed that all three mobile ecosystems evaluated fell short in this regard. While all three companies disclosed that they require third-party apps that collect user data to have privacy policies, none disclosed that they review the content of these policies for compliance with app store rules.

German Court bans WhatsApp from sharing user data with other Facebook services

A German court has upheld an order banning Facebook from collecting data on WhatsApp users in Germany. The court ruled that Facebook, which owns WhatsApp, must obtain user consent before its other services can process user information obtained from WhatsApp. WhatsApp updated its terms of service and privacy policy in August 2016 to state that it could share certain user data with Facebook, like a user’s phone number, in order to improve targeted advertising. The German case is one of several ongoing legal challenges the company is facing in the EU over its WhatsApp user data-sharing practices.

Of the 12 internet companies evaluated in the 2017 Corporate Accountability Index, Facebook received the lowest score on our indicator evaluating disclosure of the options users have to control what information the company collects, retains, and uses. Our research found that WhatsApp did not fully disclose the options users have to control what information is collected or how their information is used for targeted advertising.

ISPs in Kashmir ordered to block social media and messaging services

Authorities in the northern Indian state of Jammu and Kashmir have ordered all ISPs to block 22 social networks and messaging apps for one month or until further notice. The services include the social networks Facebook, Twitter, and QZone, and the messaging and VoIP apps Skype, WhatsApp, and WeChat, which authorities claim were “being misused by anti-national and anti-social elements” in the Kashmir Valley to disturb “peace and tranquility.” Authorities had previously ordered telecommunications companies to suspend 3G and 4G mobile internet services after videos of security forces abusing civilians circulated online and drew outrage from Kashmiris.

The rise of government-ordered network shutdowns has sparked growing concern among human rights groups and policymakers around the world. In 2016, India had the highest number of internet shutdowns in the world; Jammu and Kashmir alone has experienced 31 shutdowns since 2012, according to the Software Freedom Law Centre. The UN Human Rights Council in 2016 condemned network shutdowns as a violation of international human rights law and called on governments to refrain from ordering them. Companies, for their part, should push back on government demands to shut down networks and clearly explain the circumstances under which they comply with such requests. Findings of the 2017 Corporate Accountability Index showed that all telecommunications companies evaluated fell short of this obligation to varying extents, and none disclosed sufficient information about their policies for responding to network shutdown requests.


Hungarian Government in Hot Water Over Data Privacy

Hungarian Prime Minister Viktor Orbán and Russian President Vladimir Putin (Image via Kremlin.ru, licensed under a Creative Commons Attribution 4.0 International license)

The Hungarian government’s recent national consultation about EU policies on immigration and economic issues, “Let’s Stop Brussels!,” has come under fire not just for its skewed survey design, but also for the way its website originally handled individuals’ data. As reported by the Hungarian investigative outlet 444, the online survey portal originally included code for Yandex Metrika, a website analytics tool offered by the Russian internet company Yandex (the code was removed from the site after the 444 story was published). The choice of a Russian analytics tool is notable in light of Hungarian Prime Minister Viktor Orbán’s moves toward closer ties with Russia, which also prompted an opposition party campaign to place stickers over the government’s billboards about the consultation so they instead read “Let’s Stop Moscow!”

In addition to raising eyebrows over the potential geopolitical significance, the Hungarian government’s use of Yandex’s code also raised significant privacy concerns. Yandex Metrika includes a feature called “webvisor” which, when enabled, allows administrators to track mouse movements, clicks, keystrokes, entries, and other data to monitor how users interact with their sites. According to 444, not only was this feature enabled on the consultation website, but it was also set up to capture the information a user typed into all fields on the website—including name, age, and email address—potentially violating the site’s privacy policy, which stated that users’ personal data would not be shared with any third parties.
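Session-replay tools of this kind work by logging input events in the browser and reconstructing them server-side, which is why enabling such a feature on forms effectively ships form contents to the analytics provider. Below is a minimal sketch of the reconstruction step, using a fabricated event log; the field names and event format are illustrative assumptions, not Yandex’s actual wire format:

```python
# Hypothetical, simplified event log of the kind a session-replay
# tool records: each event notes which form field got which keystroke.
events = [
    {"field": "name", "key": "A"},
    {"field": "name", "key": "n"},
    {"field": "name", "key": "n"},
    {"field": "name", "key": "a"},
    {"field": "email", "key": "a"},
    {"field": "email", "key": "@"},
    {"field": "email", "key": "x"},
    {"field": "email", "key": "."},
    {"field": "email", "key": "h"},
    {"field": "email", "key": "u"},
]

def replay(events):
    """Reconstruct what the user typed into each form field."""
    fields = {}
    for event in events:
        fields.setdefault(event["field"], []).append(event["key"])
    return {name: "".join(keys) for name, keys in fields.items()}
```

Even this toy version shows the problem: once keystrokes are logged per field, the provider can trivially recover names and email addresses, regardless of what the site’s privacy policy promises.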

Although the 2017 Corporate Accountability Index did not examine Yandex Metrika, we did evaluate Yandex as a company, along with several of its other services. We found that overall, Yandex had limited disclosure of its policies for collecting, using, sharing, and retaining user data. As noted in the Index’s Russian company analysis, Russian law enforcement authorities may have direct access to communications data through a mass surveillance system known as SORM.

This incident also highlights the importance of writing a clear, specific privacy policy and ensuring that all third-party services used on a site comply with it, so that users know with whom they are sharing their data.

Facebook Cracks Down on Content

Facebook recently announced in a blog post that as part of its efforts to combat spam, fake accounts, and “deceptive content,” it had taken action against over 30,000 accounts in France. This move comes shortly before the French presidential election, which, according to Reuters, was a key motivator for the company’s efforts to combat misinformation on the platform.

In the 2017 Index, while Facebook received credit for disclosing some data about content it restricts in response to government requests, the company was found to disclose no information about content and accounts it restricts for violating its terms of service. Although the disclosure in the recent blog post is a step in the right direction, the company should include such information in its transparency report, along with data on actions it has taken to restrict content for other reasons.

We (can’t) Chat – Citizen Lab Research on WeChat and Weibo Content Filtering

New research from Citizen Lab examining content filtering on two Chinese messaging and social networking platforms, WeChat (operated by Tencent, which was included in the 2017 Index) and Sina Weibo (not included in the 2017 Index), found evidence of image-based filtering on WeChat. Although it is understood that WeChat, along with other Chinese internet platforms and apps, filters sensitive keywords, this is the first documented instance of similar filtering based on images deemed “sensitive” (in this case, content relating to the detention of Chinese lawyers and activists).
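Platforms that filter known images at scale commonly match them by perceptual hash rather than by exact bytes, so that re-encoded or lightly altered copies of a blacklisted image still match. Whether WeChat uses this particular technique is not established by the Citizen Lab research; the sketch below is purely illustrative, using an average hash over an 8x8 grayscale grid:

```python
def average_hash(pixels):
    """pixels: an 8x8 grid of grayscale values (0-255).
    Each bit of the hash is 1 where the pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return sum(1 << i for i, p in enumerate(flat) if p > mean)

def hamming(a, b):
    """Number of differing bits; a small distance means 'same image'."""
    return bin(a ^ b).count("1")

# A known blacklisted image and a lightly altered copy hash close
# together, so a small distance threshold still flags the re-upload.
known = [[0] * 8 if r < 4 else [255] * 8 for r in range(8)]
altered = [row[:] for row in known]
altered[7][7] = 200  # one pixel slightly darkened
is_match = hamming(average_hash(known), average_hash(altered)) <= 5
```

The design choice matters for censorship research: hash-based matching only catches images already on a blocklist, which is consistent with Citizen Lab’s observation that filtering targeted specific known-sensitive content.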

In our 2017 Index, we noted that Tencent had limited disclosure on the processes it uses to identify content or accounts that violate the company’s rules, and almost no disclosure on its processes for responding to third-party requests for content removals. Both Chinese companies in the Index, Baidu and Tencent, had more limited disclosures on policies relating to users’ freedom of expression than on privacy.

New study claims the angle users hold their phones can help hackers guess PINs

New research from Newcastle University shows how motion sensor data collected while a user types a PIN into their phone can help attackers identify that PIN. This data alone is not enough for a would-be attacker to gain access, especially without also knowing how an individual holds their phone when typing certain numbers. However, the study’s authors noted that, unlike a phone’s camera or microphone, motion sensors can be accessed by many mobile apps and websites without asking a user’s permission, and that “people were far more concerned about the camera and GPS than they were about the silent sensors.”
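The attack reduces to a classification problem: each key’s position on the keypad produces a characteristic tilt when tapped, so an attacker who has collected labeled sensor traces can match new readings against them. Here is a toy nearest-neighbour sketch with fabricated (tilt_x, tilt_y) values; the actual study used much richer motion data and a trained neural network:

```python
# Fabricated training data: average (tilt_x, tilt_y) sensor readings
# observed while known digits were tapped on a standard keypad layout.
samples = {
    "1": (-0.8, 0.9), "2": (0.0, 0.9), "3": (0.8, 0.9),
    "4": (-0.8, 0.4), "5": (0.0, 0.4), "6": (0.8, 0.4),
    "7": (-0.8, -0.2), "8": (0.0, -0.2), "9": (0.8, -0.2),
    "0": (0.0, -0.8),
}

def guess_digit(reading):
    """Nearest-neighbour match: the digit whose training sample lies
    closest (squared Euclidean distance) to the new sensor reading."""
    def dist(digit):
        x, y = samples[digit]
        return (reading[0] - x) ** 2 + (reading[1] - y) ** 2
    return min(samples, key=dist)

def guess_pin(readings):
    """Guess a whole PIN from a sequence of per-tap readings."""
    return "".join(guess_digit(r) for r in readings)
```

Because no permission prompt guards the motion sensors, a malicious page or app could collect the readings silently while the user types into another context, which is precisely the study’s concern.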

This study is one example of why app permissions matter: many apps may have access to this type of user data, and information not treated as sensitive for permission purposes can reveal more than users expect. Mobile ecosystems should serve as better gatekeepers for user privacy in their app stores. The Index looks for companies to disclose that they review the privacy policies of apps in a way that provides adequate privacy safeguards for users.

Digital rights groups in India and Pakistan have adapted the Ranking Digital Rights Corporate Accountability Index methodology to evaluate if and how telecommunications and internet companies in those countries disclose commitments to users’ freedom of expression and privacy.

The Centre for Internet and Society, an internet research institute based in India, applied the 2017 Index methodology to evaluate eight telecommunications and internet companies operating in India. Findings showed that while companies demonstrated some commitment to users’ privacy, most fell short in key areas. The organization held an event in January 2017 to launch the report, as well as a “rankathon” for participants to learn more about the companies evaluated and to provide feedback on the methodology and ways to adapt it for future research.

In December 2016, Pakistan-based Digital Rights Foundation published a study examining privacy-related disclosures of five telecommunications companies, based on privacy indicators adapted from the Index methodology. Findings showed that not all privacy policies were available in Urdu or other languages commonly spoken in Pakistan, and that company policies for responding to government requests for user data were often unclear. The organization’s executive director, Nighat Dad, discussed some of these findings at our roundtable session at RightsCon on how to conduct research and advocacy focused on ICT companies.

These types of projects using the Index methodology allow for more in-depth analysis of how companies in different countries or regions commit to respecting users’ rights. We encourage researchers and civil society to adapt the Index methodology to launch research initiatives evaluating company disclosures of policies affecting users’ freedom of expression and privacy in their own countries and contexts.


Twitter takes on Trump

According to Twitter, the Trump administration last week withdrew its attempt to force the company to reveal the identity of one of its users. The account, @ALT_USCIS, is one of several “alt agency” accounts created after President Donald Trump took office, and which tweets criticisms of the administration. In response to the demand, Twitter announced it was suing the US government on the grounds that the demand was “unlawful and unenforceable because it violates the First Amendment rights of both Twitter and its users by seeking to unmask the identity of one or more anonymous Twitter users voicing criticism of the government on matters of public concern.” Twitter dropped its lawsuit after the government withdrew the summons.

The Ranking Digital Rights 2017 Corporate Accountability Index looks for companies to disclose their processes for responding to government requests for user information, including whether the company carries out due diligence on such requests before deciding how to respond and commits to pushing back on inappropriate or overbroad requests. This recent case is an example of Twitter implementing these commitments. Our research showed that Twitter clearly disclosed its processes for responding to government requests for user information (P10) and also topped all internet and mobile companies evaluated for its transparency reporting on the government and private requests it receives to hand over user information (P11).

In Europe: bans on encryption, hate speech

EU Justice Commissioner Věra Jourová has indicated that the European Commission will propose new rules this June to allow law enforcement to access information from encrypted apps. This follows pressure from the governments of France, Germany, and the UK, including UK Home Secretary Amber Rudd’s recent call for police to be able to access encrypted chats on WhatsApp and similar services. However, as technical experts continue to caution, providing such access would prevent companies from deploying secure end-to-end encryption and would put user privacy at risk. In our recommendations for companies, we note that, except where prohibited by law, companies should publicly commit to implementing the highest encryption standards available, including end-to-end encryption. The EU’s proposed rules could prevent companies from doing so.

Germany’s cabinet has approved a plan to fine social media networks that do not remove hate speech quickly enough. The plan raises numerous freedom of expression concerns and puts companies, rather than courts, in the position of determining what speech is legally permissible. As noted in our recommendations for governments, authorities should limit the legal liability imposed on companies for their users’ speech and other activities, consistent with the Manila Principles on Intermediary Liability.

New rumors about Google’s return to China

The South China Morning Post reported that Google is engaged in talks with the Chinese government to potentially re-enter the Chinese market with certain services, such as Google Scholar. Google has not confirmed or commented on these discussions or on whether it plans to re-enter China. However, the company did announce that users in China can now download the Translate app, which the Washington Post writes may be a signal that the company is slowly moving back into the Chinese market. In 2010, Google formally withdrew from China and shut down its Chinese-language search engine, citing concerns over censorship. Google’s products are currently blocked by the Chinese government’s “Great Firewall” and are not available to Chinese users without censorship circumvention technology.

Tech companies should conduct human rights impact assessments (HRIAs) before launching new services or entering new markets, to identify any risks to users’ freedom of expression and privacy and take the necessary steps to mitigate them. This is particularly important before launching services in markets, such as China, where the government has a record of human rights abuses.

As our research in the 2017 Index showed, Google expressed a commitment to carry out HRIAs, stating: “Prior to localizing in a new market, the company approach is to first examine the government’s record with respect to freedom of expression and privacy by consulting reports prepared by NGOs and analyzing the laws that are relevant for freedom of expression and privacy in that country.” Therefore, if reports that Google is considering re-entry into China are true, the company should live up to its stated commitments and carry out human rights impact assessments to help it determine whether or how its presence in China may change in the future.

Our research indicates that although many aspects of Chinese companies’ poor performance in the 2017 Index can be blamed on China’s legal and regulatory environment, there are some areas in which companies still have room to improve their disclosures on certain privacy and free expression issues. For more on this, see our analysis comparing free expression and privacy disclosures from Baidu and Tencent.