Corporate Accountability News Highlights is a regular series by Ranking Digital Rights highlighting key news related to tech companies, freedom of expression, and privacy issues around the world.
Facebook data breach tests GDPR
Facebook could be hit with a $1.63 billion fine over its recent data breach affecting 50 million users. Ireland’s Data Protection Commission this week opened an investigation into whether the company’s handling of the breach violated the EU’s new privacy rules, which came into force in May 2018.
The company last week revealed that hackers had gained access to the accounts of at least 50 million Facebook users. Roughly 90 million users were automatically logged out of their accounts as a precaution. Fewer than 10 percent of affected users are located in the European Union, according to a tweet from Irish regulators.
The case is the first major test of the General Data Protection Regulation (GDPR), the EU’s sweeping privacy law, which carries stiff financial penalties for violations. The GDPR requires companies that handle user data to safeguard it, to notify regulators within 72 hours of becoming aware of a breach, and to inform affected users without undue delay when a breach puts them at high risk. According to CNBC, while Facebook appears to have notified regulators of the data breach, Irish regulators will investigate whether the company violated the GDPR’s requirement to take appropriate security measures to safeguard people’s data. If the company is found not to have done enough to protect user information, it could be fined up to 4 percent of its annual global revenue, or roughly $1.63 billion.
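As a rough check of that figure: assuming the $1.63 billion estimate is calculated against Facebook’s reported 2017 global revenue of about $40.7 billion (a revenue base not specified in the CNBC report), the GDPR’s 4 percent cap works out as expected:

maximum fine ≈ 0.04 × $40.7 billion ≈ $1.63 billion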
Internet, mobile, and telecommunications companies collect, store, and share vast amounts of information about users and should have clear policies in place for keeping this data secure. They should also clearly disclose their policies for addressing data breaches in the event that they occur. Findings of the 2018 Corporate Accountability Index showed that while Facebook disclosed more about its processes for addressing security vulnerabilities than most internet and mobile companies evaluated, it failed to provide any information about its policies for responding to data breaches, including whether it notifies affected users.
Tech companies pledge to help the EU fight misinformation
A group of companies including Facebook and Google has signed on to a new initiative to fight the spread of misinformation online, part of the EU’s effort to combat news manipulation and interference ahead of the 2019 European parliamentary elections. The European Commission’s Code of Practice on Disinformation asks companies to monitor and voluntarily remove “verifiably false or misleading” content and to increase the transparency of political advertising.
The initiative was first proposed in April, when the Commission convened a multistakeholder forum that included online platforms, advertisers, journalists, and civil society to discuss self-regulatory solutions for addressing the spread of misinformation on social media and internet platforms. Though proponents have hailed the plan as a key step in combating misinformation, media and civil society stakeholders have criticized it for lacking “measurable objectives,” enforcement tools, and oversight, Euractiv reports.
In 2016, the European Commission introduced a similar self-regulatory initiative aimed at combating the spread of hate speech online. A group of companies—including Facebook, YouTube (Google), Twitter, and Microsoft—signed on to the code, despite warnings from critics that the plan gave private companies too much power to censor content.
While private companies have the right to establish rules about what types of content are prohibited on their platforms, they should be transparent about those rules and how they are enforced. Companies should also disclose how they handle government and private requests to remove content. Findings of the 2018 Index showed that most internet platforms lacked transparency about the volume and nature of content removed through private processes. Ranking Digital Rights urges companies to clearly disclose how much and what types of content they have removed, filtered, or restricted, and why, and to notify users when they do so.
Trump administration opposes Google’s Chinese search engine
The Trump administration says it opposes Google’s efforts to re-enter the Chinese market. The Wall Street Journal reports that Vice President Mike Pence on Thursday called on the company to end development of a search engine known as Dragonfly, a confidential project that rights groups say would enable internet censorship and compromise user privacy.
News of the project was first reported by The Intercept, which revealed that the Dragonfly search engine and news app would blacklist websites and search terms in line with the Chinese government’s strict censorship demands. The Chinese government operates an increasingly sophisticated internet censorship system, known as the “Great Firewall,” that filters and blocks information about human rights, political dissent, and other blacklisted topics. According to documents leaked to The Intercept, Dragonfly would automatically filter banned sites out of search results. Further reports indicate that users’ searches would be tracked by linking them to their individual phone numbers.
Google exited China in 2010, citing government censorship demands and cyberattacks targeting the accounts of human rights activists. Plans to re-enter China have sparked new criticism from rights groups, who say the Dragonfly search engine would aid the government’s extensive censorship and surveillance practices. Companies should conduct comprehensive and credible human rights risk assessments before launching new products or entering new markets in order to mitigate freedom of expression and privacy risks to users. They must also be fully transparent about how much content they filter or remove at the behest of governments, and why, as well as about their processes for handling government requests for user data.