Corporate Accountability News Highlights: EU Parliament committee endorses end-to-end encryption, companies are behind in preparing for new EU data rules, and U.S. net neutrality debate resurfaces

Corporate Accountability News Highlights is a regular series by Ranking Digital Rights that highlights key news related to tech companies, freedom of expression, and privacy issues around the world.

EU Parliament committee endorses end-to-end encryption

European Parliament, image via Wikipedia

A European Parliament committee is proposing that end-to-end encryption be mandatory for all electronic communications. The proposal calls for amending the EU Charter of Fundamental Rights to include online privacy. It also includes a ban on encryption “backdoors” that give governments access to encrypted communications. “Member states shall not impose any obligations on electronic communications service providers that would result in the weakening of the security and encryption of their networks and services,” according to the proposal.

This is a stark contrast to recent discussions among officials in the UK, Germany, and Australia who say authorities should be able to access encrypted communications to stop terrorism. As highlighted in the 2017 Corporate Accountability Index, governments should not pass measures that undermine encryption. As the EU Parliament committee’s proposal asserts, “The protection of confidentiality of communications is also an essential condition for the respect of other related fundamental rights and freedoms, such as the protection of freedom of thought, conscience and religion, and freedom of expression and information.”

Companies not ready for new EU data protection rules

The Financial Times reports that European companies are unprepared for the EU’s new data protection regulations that come into force in less than a year. Many businesses are “dramatically underestimating” the impact of the General Data Protection Regulation (GDPR), according to the report, and appear to be behind schedule in making necessary changes, or are unaware of their obligations under the new rules. Although the regulation has already been adopted, companies have until May 2018 to become compliant with its rules. The Irish Times also cited a survey showing that two-thirds of 150 businesses in Ireland “did not realize what they would have to do regarding the GDPR.”

Any company that handles personal data of EU citizens must comply with the GDPR. The rules cover a wide range of data protection issues, and include new requirements for handling personal data and reporting data breaches. Findings of the 2017 Corporate Accountability Index showed that most companies lacked transparency about how they handle user information, and only three of the 22 companies evaluated disclosed any information about their process for responding to data breaches.

Companies and rights groups to protest net neutrality rollback in the U.S.

Several companies, including Amazon, Netflix, and Reddit, are joining with civil society advocates for an “internet-wide day of action to save net neutrality” to protest the Federal Communications Commission’s (FCC) plan to repeal the current net neutrality rules. In February 2015, the FCC classified internet service providers as “common carriers” under Title II of the Communications Act, protecting the principle of net neutrality—requiring carriers to treat all types of content and traffic equally. The measure was hailed by internet rights groups since it created strong protections for net neutrality, helping to ensure equal access to content and the free flow of information online.

In May 2017, the FCC voted to begin the process of repealing the 2015 net neutrality rules and the Title II classification for ISPs. On July 12, websites participating in the day of action will display a message about the importance of net neutrality and provide a prompt for users to submit a comment to the FCC and Congress in support of strong net neutrality protections.

While some telecommunications companies support net neutrality, our research shows that many lack transparency about their network management policies and practices. The Corporate Accountability Index evaluates whether companies disclose if they engage in practices that affect the flow of network traffic, such as prioritizing certain content or throttling traffic. We expect companies to avoid these types of practices except for legitimate traffic management reasons, such as ensuring the flow of traffic through their networks. If companies do engage in throttling, traffic shaping, or prioritization, we expect them to publicly disclose this and to explain their purpose for doing so. Of the ten telecommunications companies evaluated in the 2017 Index, Vodafone was the only company to clearly disclose a commitment to not prioritize, block, or delay certain types of traffic other than for assuring quality of service and reliability of the network.

UN expert says companies must do more to advance freedom of expression online

Many of the most serious threats to human rights online—like censorship, surveillance, and network shutdowns—are driven by governments, but are often carried out by companies. It is well-established that states have an obligation to protect human rights, but what responsibilities do companies have?

On June 12, David Kaye, the UN Special Rapporteur on freedom of expression, presented his latest report to the UN Human Rights Council, addressing this question. The report analyzes the human rights responsibilities of internet service providers and telecommunications companies, and draws upon meetings and consultations with private sector and civil society actors, including a written submission from Ranking Digital Rights.

Kaye’s report states that governments should work to protect and promote freedom of expression online, including taking steps to prevent companies from interfering with human rights. Government actions, such as surveillance or policies that undermine encryption, can also compel companies to violate human rights.

The report also found that governments and companies are insufficiently transparent with the public about demands being placed on companies and how companies are implementing those demands. “A lack of transparency pervades government interferences with the digital access industry,” according to Kaye.

In some cases, this is due to vague laws that give authorities overly broad powers to shut down networks or prevent companies from publicly disclosing information about government access to user data. However, as the report notes, even if companies are legally prohibited from being fully transparent about how they respond to such government requests, they should seek to disclose the maximum amount of information possible under the law—which, according to RDR’s research, most companies are failing to do. Even more troubling, while companies may be legally obligated to comply with government requests for censorship and surveillance, Kaye’s report cites instances in which companies appear to have gone above and beyond their legal obligations in assisting with government surveillance activities.

Companies should also disclose more information about their own policies and practices that affect individuals’ rights to freedom of expression—such as terms of service, content moderation practices, privacy policies, and data collection practices. They should consult with users, civil society, and fellow companies on best practices for transparency, according to the report. Kaye also observes that multistakeholder engagement allows companies to benefit from external expertise and accountability. Citing RDR’s 2015 Index findings, the report highlights how membership in initiatives such as the Global Network Initiative and the Telecommunications Industry Dialogue often correlates with greater institutional commitments to respecting human rights. Our 2017 Index found even stronger evidence of that correlation with the addition of several more companies.

Governments and corporations each have steps they could take to significantly improve their human rights commitments, and also to improve their overall transparency about how these commitments are actually implemented. As the report notes, internet users “deserve to understand how those actors interact with one another, how these interactions and their independent actions affect us and what responsibilities providers have to respect fundamental rights.”

Read the full report here.

Read RDR’s submission here.

Corporate Accountability News Highlights: Russia moves forward with banning anonymous use of messaging apps, Australian government sets its sights on encryption, and research finds majority of apps share user data with third parties


Russian legislature considering banning anonymous use of messaging apps

Identity policy scores from the 2017 Corporate Accountability Index

Russian lawmakers are discussing a bill that would ban anonymity on certain messaging apps. If passed, the law would require users to verify their identities using their mobile phone numbers, and services that continue to allow anonymous users could be fined or blocked. Russian law requires telecommunications companies to verify the identities of their subscribers; mobile phone numbers are therefore directly linked to an individual’s offline identity. In a similar move against online anonymity, Russian lawmakers also recently submitted draft legislation to ban Virtual Private Networks (VPNs) and other internet anonymizers that allow internet users to access blocked content.

The ability to communicate anonymously is essential to ensuring freedom of expression. The 2017 Corporate Accountability Index evaluates if companies disclose whether they require users to verify their identity with a government-issued identification or with other forms of identification that could be connected to their offline identity. Our research showed that 10 of 12 internet and mobile companies disclosed a policy requiring users to verify their identity as a condition of using at least one of the company’s services evaluated. Of the two Russian companies, Yandex disclosed that it may require users to verify their identities for all services, while Mail.Ru disclosed this requirement for one of its services.

As noted in our recommendations, governments should respect the right to anonymous online activity as central to freedom of expression, privacy, and human rights, and refrain from requiring companies to document users’ identities when it is not essential to the provision of service.

Australian government suggests plans for greater access to encrypted communications

The Australian government announced it is considering legal measures that would place greater obligation on companies to assist authorities in decrypting user communications. Prime Minister Malcolm Turnbull called for stronger cooperation between tech companies and authorities to fight extremism, so that “terrorists and organized criminals are not able to operate with impunity in ungoverned digital spaces online.” The Australian government said it plans to raise this issue with the other members of the Five Eyes intelligence network (the U.S., UK, Canada, and New Zealand), during the group’s next meeting later this month.

As highlighted in the 2017 Corporate Accountability Index, governments should not adopt measures to undermine encryption. Strong encryption is vital not only for human rights, but also for economic and political security, and technical experts note that weakening encryption creates significant risks to privacy and security.

Research finds majority of apps share user data with third parties

According to researchers at UC Berkeley and IMDEA Networks Institute, more than 70% of apps share user data with third-party services. Once users give an app permission to access their data, app developers can share that data with third-party companies, potentially without the user’s knowledge or consent, according to the research. And if different apps use code from the same third-party software library, the library’s developers may be able to aggregate user information from different apps to build detailed user profiles.
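The aggregation mechanism described above can be sketched in a few lines of Python. This is a hypothetical illustration, not code from the study: the app names, device identifiers, and data fields are all invented, but it shows how a shared device identifier lets a single third-party library join the records it receives from different host apps into one cross-app profile.

```python
# Hypothetical sketch: a third-party SDK embedded in two different apps
# receives events from each, and can join them into one profile because
# both apps report the same device identifier.

from collections import defaultdict

# Invented records, as the SDK might receive them from each host app.
events = [
    {"device_id": "ad-1234", "app": "weather_app", "data": "location=Berlin"},
    {"device_id": "ad-1234", "app": "fitness_app", "data": "steps=9500"},
    {"device_id": "ad-5678", "app": "weather_app", "data": "location=Madrid"},
]

def build_profiles(events):
    """Aggregate per-app events into one profile per device ID."""
    profiles = defaultdict(list)
    for event in events:
        profiles[event["device_id"]].append((event["app"], event["data"]))
    return dict(profiles)

profiles = build_profiles(events)
# "ad-1234" now links behavior observed in both apps in a single profile.
```

Neither app on its own handed over a full picture of the user; the linkage happens entirely on the SDK side, which is why it can occur without the user’s knowledge or consent.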

These findings are based on data collected through an Android app called the Lumen Privacy Monitor, which allows users to monitor what information the apps they have installed are collecting, and with whom it is being shared. The app, after obtaining user consent, also shares non-personal data with researchers about the scope of data being collected and shared.

Of the three mobile ecosystem companies (Apple, Google, and Samsung) evaluated in the 2017 Corporate Accountability Index, none disclosed that they evaluate whether the third-party apps offered in their app stores disclose what user information they share with other third parties, and with whom they share it.

Corporate Accountability News Highlights: UK Government calls for increased internet regulation, Brazil holds hearings on WhatsApp blocking, and Weibo users face restrictions on Tiananmen anniversary


UK Government calls for increased internet regulation following terror attack

UK Prime Minister Theresa May (Image via Number 10, licensed CC BY-NC-ND 2.0)

In response to the most recent terror attack in London, UK Prime Minister Theresa May is calling for tighter internet regulations in order to combat terrorism. May said she plans to pursue international agreements to regulate cyberspace and criticized internet companies for providing “safe spaces” that allow extremists to communicate.

Adding to May’s comments, UK Home Secretary Amber Rudd said the government wanted tech companies to do more to take down extremist content and to limit access to end-to-end encryption.

In response, some tech companies defended their efforts to identify and remove extremist content. “Using a combination of technology and human review, we work aggressively to remove terrorist content from our platform as soon as we become aware of it,” according to Facebook.

Human rights advocates warn that government attempts to regulate the internet could threaten freedom of expression and that efforts to address online extremist content must be consistent with international human rights norms. According to David Kaye, UN Special Rapporteur on freedom of expression, efforts to combat violent extremism can be the “perfect excuse” for both authoritarian and democratic governments to restrict freedom of expression and control access to information.

Brazil’s Supreme Court holds hearings on WhatsApp blocking

Brazil’s Supreme Court has held hearings to decide whether it is legal for courts to direct telecommunications companies to block apps like WhatsApp. Lower courts on numerous occasions have ordered telecommunications companies to block WhatsApp (which is owned by Facebook), after the company refused to turn over user information requested as part of criminal investigations. Judges in these cases based their rulings on an interpretation of Brazil’s Marco Civil law, which some civil society groups argue is an improper application of the law.

WhatsApp co-founder Brian Acton, who testified at one of the hearings, explained why the company is unable to turn over user information: “Encryption keys relating to conversations are restricted to the parties involved in those conversations,” Acton said. “No one has access to them, not even WhatsApp.” The court is expected to issue a ruling on the matter in the next few weeks.

Of the eight messaging and VoIP services evaluated in the 2017 Corporate Accountability Index, WhatsApp earned the highest score for its disclosure of encryption policies. The company disclosed that transmissions of user communications are encrypted by default using unique keys, as well as that end-to-end encryption is enabled for users by default.

Weibo users censored on Tiananmen anniversary

Users of Chinese social network service Weibo faced restrictions when attempting to post photos or videos on the platform during the anniversary of the 1989 Tiananmen Square pro-democracy protests. Users outside of China were unable to post photos or videos, and users within China also reported that they were unable to change their profile information or post photos and videos as comments during this time.

Although references to Tiananmen Square are regularly censored in China, both by the government and by technology companies, censorship is often heightened in early June each year, as documented by groups such as Freedom House. In a post on June 3, Weibo said that some of the platform’s functions would not be available until the 5th due to a “systems upgrade.” Comments on the post were disabled, and according to a Mashable reporter, “when we attempted to post a comment saying: ‘I want to comment,’ a notice popped up saying that the comment was in violation of Weibo’s community standards.”

As research in the 2017 Corporate Accountability Index showed, Chinese companies are legally liable for publishing or transmitting prohibited content, and services that do not make a concerted effort to police such content can be blocked in China.

Corporate Accountability News Highlights: Facebook internal documents highlight lack of transparency on content removals, Apple reveals it received a National Security Letter, and WeChat unveils new search feature


The Guardian publishes Facebook internal content guidelines, offering glimpse into opaque process

Image by Veluben (Licensed CC BY-SA 3.0)

An investigation by the Guardian has revealed new details about Facebook’s internal rules for policing and removing content. The “Facebook Files” series is based on internal training manuals for the company’s moderators, leaked to the Guardian, which outline rules for reviewing and removing content, including violent, sexually explicit, extremist, racist, and other types of sensitive material. Facebook moderators reported having little time to decide whether to remove a given item and voiced concern over confusing and inconsistent content removal policies, according to one report.

In response to the Guardian’s investigation, Facebook said they “don’t always share the details of our policies, because we don’t want to encourage people to find workarounds – but we do publish our Community Standards, which set out what is and isn’t allowed on Facebook, and why.”

The Guardian’s investigation puts a spotlight on the lack of transparency by social network companies regarding their internal policies for evaluating and removing content. Findings of the 2017 Corporate Accountability Index showed that most companies evaluated failed to disclose any information about the volume and nature of content they remove for terms of service violations. Only three of the 22 companies evaluated—Google, Microsoft, and Twitter—received any credit for doing so. Facebook does not publish data on any type of content removed for violating its rules, and did not receive any credit on this indicator.

Apple discloses it received a now-declassified National Security Letter

In its latest transparency report, Apple revealed for the first time that a National Security Letter it received from the U.S. government has been declassified, though the company did not publish the letter itself. The USA Patriot Act allows the government to compel a company to disclose user information by issuing a National Security Letter, without a court order. National Security Letters also include a gag order preventing companies from disclosing any information about the request, including to affected users. Through legal challenges, and following the passage of the 2015 USA Freedom Act, some companies have been able to publish some of the national security orders they have received.

Apple also reported it received double the number of national security orders in the last half of 2016 compared to the first half of the year. Apple defines “national security orders” as requests made under the Foreign Intelligence Surveillance Act (FISA) as well as National Security Letters it received. As the company notes, U.S. law allows companies to publish national security requests only within a certain range, but prohibits them from publishing the specific number.
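The “range” reporting described above can be sketched as a simple bucketing function. This is a hypothetical illustration: the band width of 250 used here is invented for the example and is not the actual figure set by U.S. law, which defines its own permitted reporting bands.

```python
# Hypothetical sketch of "range" reporting for national security requests:
# instead of publishing an exact count, a company reports the band the
# count falls into. The band width (250) is illustrative only.

def report_band(count: int, width: int = 250) -> str:
    """Map an exact request count to a disclosure range like '0-249'."""
    low = (count // width) * width
    return f"{low}-{low + width - 1}"
```

Under these assumed bands, for example, a company that received 310 orders would be able to disclose only that it received between 250 and 499, never the exact figure.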

In the 2017 Corporate Accountability Index, Apple received the fourth-highest score of the 12 internet and mobile companies evaluated for its disclosure of data about government and other third-party requests for user information, including the number of requests it receives and with which it complies, the legal authority requests are made under, and what types of user information are requested.

WeChat launches new search feature

WeChat, a messaging app owned by Chinese internet company Tencent, recently launched a new search feature that has been called a “direct challenge” to Chinese search engine Baidu, which currently dominates the Chinese internet search market. Rather than aggregating search results from across the web, the new feature will retrieve content from within WeChat, including posts from a user’s friends and news articles published directly to WeChat. Articles published on WeChat have their own URLs, but Baidu’s search engine is unable to index them.

In the 2017 Corporate Accountability Index, Baidu’s search engine was the lowest-scoring search engine of the five evaluated due to its low levels of disclosure on policies affecting users’ rights to freedom of expression and privacy. Baidu was also the only company in the entire Index to receive no credit in the Governance category, which evaluates a company’s institutional commitments to freedom of expression and privacy principles. Overall, although Tencent disclosed more information than Baidu about its policies affecting users’ rights, both companies disclosed less about policies affecting users’ freedom of expression than about privacy-related policies.