Corporate Accountability News Highlights is a regular series by Ranking Digital Rights highlighting key news related to tech companies, freedom of expression, and privacy issues around the world.

Google under scrutiny over its collection of user data

Photo by user albersHeinemann on Pixabay

Google is facing a lawsuit for allegedly misleading users about its collection of location data even when the “Location History” setting is turned off.

In the lawsuit, filed on August 17 in federal court in San Francisco, attorneys representing a man named Napoleon Patacsil argued that Google is violating the California Invasion of Privacy Act and the state’s constitutional right to privacy. The lawsuit seeks class-action status to represent all Google mobile users in the US, on both Android devices and iPhones.

The lawsuit was filed just days after the publication of an Associated Press report that found that “many Google services on Android devices and iPhones store your location data even if you’ve used a privacy setting that says it will prevent Google from doing so.”

The privacy setting in question is called “Location History,” which users can turn off. “With Location History off, the places you go are no longer stored,” Google’s support page on the matter previously stated. The company has since edited the page to clarify that it continues to track users’ location even when the setting is disabled.

The company is facing additional scrutiny over the sweeping amount of data it collects on users, following the release of a new study, which found that “a major part of Google’s data collection occurs while a user is not directly engaged with any of its products.”

Internet, mobile, and telecommunications companies should be transparent about how they handle user information, including what user information they collect, how they collect it, and for what purposes. The 2018 Corporate Accountability Index found that while Google was transparent about the types of user information it collects and how, the company failed to disclose that it limits the collection of user information to what is directly relevant and necessary to accomplish the purpose of its services. Of the 22 companies ranked in the Index, only three — Kakao, Samsung, and Yandex — published clear disclosures stating that they minimize the collection of user information to what is relevant and necessary to accomplish the purpose of their services.

The facts uncovered by the Associated Press also underscore the need for systematic, regular, and independent technical testing to verify whether company policy disclosures, including those that RDR tracks and evaluates, are fully consistent with technical reality.


Image by Warren R.M. Stuart (licensed CC BY-NC-ND 2.0)

While mobile applications don’t always offer the level of privacy and security that consumers expect, many top peer-to-peer (P2P) payment services raise no major privacy and security red flags, according to new research by Consumer Reports, Ranking Digital Rights, and Disconnect, a maker of privacy software.

Consumer Reports rated five mobile P2P applications—Apple Pay, Facebook P2P Payments, Square Cash, Venmo, and the stand-alone Zelle service—based on a set of privacy and security standards, including how well they authenticate payments to prevent fraud, secure user data, and protect privacy.

While Apple Pay earned top marks for its payment authentication and privacy measures, all five applications were rated as “good enough to use,” according to Consumer Reports. 

The ratings are based on a set of criteria called the Digital Standard, developed in partnership with leading privacy, security, and human rights organizations, including Ranking Digital Rights. This P2P rating is the latest round of collaborative research and testing that uses the Digital Standard to evaluate applications and internet-connected products that make up what is often called the “internet of things.” The goal of the Digital Standard is to encourage companies to prioritize privacy and security and to help consumers make informed choices.

Here are some highlights from the findings:

  • Apple Pay rated highest on data privacy, as Apple states that it does not store consumers’ original credit card numbers and limits information sharing to a few service-specific purposes.
  • While all five P2P apps let users set up PINs or two-factor authentication for an additional layer of security, Apple Pay was the only service found to require authentication for each payment by default.
  • All the P2P apps provided data encryption and most disclosed that they implement internal safeguards to secure data.

To read more about the findings and how the different apps performed, see the full report here.


Alex Jones caricature by Flickr user DonkeyHotey (CC BY 2.0)

Tech giants ban conspiracy theorist Alex Jones

This week, Apple, Facebook, Google, and other social media and tech companies took steps to ban InfoWars, a website and media platform produced by right-wing conspiracy theorist Alex Jones.

Apple Podcasts removed five of the six podcasts produced by InfoWars for violating its policy, which “does not tolerate hate speech.” Facebook took down four InfoWars pages for “repeated violations” of the site’s guidelines, including “glorifying violence” and “dehumanizing immigrants.” YouTube terminated Jones’ channel, which had 2.4 million subscribers, for violating its community guidelines.

Other services that moved to ban Jones’ InfoWars include Spotify, Pinterest, the audio streaming app Stitcher, MailChimp, LinkedIn, and even the adult-video website YouPorn.

Jones is behind a number of controversial conspiracy theories, such as the false claims that the 9/11 attacks were an “inside job,” that the Sandy Hook school shooting was a hoax, and that Obama is a “radical Muslim.” The actions taken by major platforms this week were in response to violations of their policies against hate speech and harmful content.

The measures came a few weeks after Facebook, Spotify, and YouTube (Google) removed content by Jones for violating their terms of service and policies.

Spotify had previously removed specific episodes of The Alex Jones Show before shutting down the entire podcast this week. Three other InfoWars podcasts are still live on the service, according to The Guardian.

Twitter, however, has not banned InfoWars or Jones. CEO Jack Dorsey explained that the company had not done so because Jones and InfoWars “did not violate our rules.”

Internet, mobile, and telecommunications companies should be transparent about what their rules are and how they enforce them. For example, companies should clearly disclose whether any government authorities or private entities receive priority consideration when flagging content to be restricted for violating the company’s rules. They should also regularly publish data about the volume and nature of actions taken to restrict content or accounts that violate those rules. The 2018 Corporate Accountability Index found that while most of the 22 companies evaluated disclosed at least some information about what content and activities they do not allow and how they enforce their rules, only four companies — Twitter, Microsoft, Facebook, and Google — published data about such restrictions.

Companies should also notify users when they restrict content. Services that host user-generated content should notify those who posted the content and users trying to access it. The notification should include a clear reason for the restriction. The 2018 Index found that companies do not disclose sufficient data about their user notification policies when they restrict access to content or accounts.


A version of this article was originally published on Global Voices. This post is published as part of an editorial partnership between Global Voices and Ranking Digital Rights.

Graphic by Omar Momani for 7iber (CC BY-NC-ND 2.0)

Following the Charlie Hebdo shootings in January 2015, Facebook co-founder and CEO Mark Zuckerberg posted a message reflecting on religion, freedom of expression and the controversial editorial line of the magazine.

“A few years ago, an extremist in Pakistan fought to have me sentenced to death because Facebook refused to ban content about Mohammed that offended him. We stood up for this because different voices—even if they’re sometimes offensive—can make the world a better and more interesting place,” Zuckerberg wrote on his page.

Later that same month, Facebook agreed to restrict access to an unspecified number of pages for “offending prophet Muhammad” in Turkey at the request of local authorities.

Turkey is notorious for the number of requests it makes to internet companies to remove content for violating its local laws, and it is not the only government in the Middle East to resort to such tactics to silence critical voices.

While a number of the region’s governments sometimes make direct requests for content removal—along with exerting “soft” pressure through other means—the failures of tech giants in moderating content in the region further exacerbate the struggle of users to exercise their right to freedom of expression.

The issue highlights a critical need for internet platforms to be more transparent about the role that governments, private parties, and companies themselves play in policing the flow of information online.

Research from the Ranking Digital Rights 2018 Corporate Accountability Index showed that most of the world’s powerful platforms failed to disclose enough information about their content moderation policies and practices. For instance, just four of the 12 companies evaluated—Facebook, Google, Microsoft, and Twitter—provided any data about the volume and nature of content and accounts they remove for terms of service violations. Most failed to disclose how they identify content that violates their terms—and not one company revealed whether it gives priority to governments to flag content or accounts that breach their rules.

Abuse of flagging mechanisms

Across the Middle East, social media platforms’ “flagging” mechanisms are often abused to silence government critics, minority groups, or views and forms of expression deemed out of line with majority beliefs about society, religion, and politics.

In 2016, Facebook suspended several Arabic-language pages and groups dedicated to atheism following massive flagging campaigns. This effectively eliminated one of the few (in some cases, the only) spaces where atheists and other minorities could come together to share their experiences, and freely express themselves on matters related to religion. Across the region, atheism remains a taboo that could be met with harassment, imprisonment or even murder.

“[Abusive flagging] is a significant problem,” Jessica Anderson, a project manager at onlinecensorship.org, which documents cases of content takedowns by social media platforms, told Global Voices.

“In the Middle East as well as other geographies, we have documented cases of censorship resulting from ‘flagging campaigns’—coordinated efforts by many users to report a single page or piece of content.”

Flagging mechanisms are also abused by pro-government voices. Earlier this year, Middle East Eye reported that several Egyptian political activists had their pages or accounts suspended and live-streams shut down, after they were reported by “pro-government trolls.”

“What we have seen is that flagging can exacerbate existing power imbalances, empowering the majority to ‘police’ the minority,” Anderson said. “The consequences of this issue can be severe: communities that are already marginalized and oppressed lose access to the benefits of social media as a space to organize, network, and be heard.”

Failure to consider user rights in context

This past May, Apple joined the ranks of Facebook and Twitter—the more commonly cited social media platforms in this realm—when the iTunes store refused to upload five songs by the Lebanese band Al-Rahel Al-Kabir. The songs mocked religious fundamentalism and political oppression in the region.

A representative from iTunes explained that the Dubai-based Qanawat, a local content aggregator hired by Apple to manage its store for the region, elected not to upload the songs. An anonymous source told The Daily Star that iTunes did not know about Qanawat’s decision, which it made due to “local sensitivities.” In response to a petition from Beirut-based digital rights NGO SMEX and the band itself, iTunes uploaded the songs and pledged to work with another aggregator.

This case not only illustrates how “local sensitivities” can interfere with decisions about which types of content get posted and stay online in the region; it also shows that companies need to practice due diligence when making decisions likely to affect users’ freedom of expression.

Speaking to Global Voices, Mohamad Najem, co-founder of SMEX, pointed out that both Facebook and Twitter have their regional offices in the United Arab Emirates (UAE), which he described as one of the “most repressive countries” in the region.

“This is a business decision that will affect free speech in a negative way,” he said. He further expressed concern that the choice of having an office in a country like the UAE “can sometimes lead to enforcing Gulf social norm[s]” on an entire [Arab] region that is “dynamic and different.”

Location, location, location

Facebook and Twitter have offices in the UAE that are intended to serve the Middle East and North Africa (MENA), a region that is ethnically, culturally, and linguistically diverse and encompasses a wide range of political viewpoints and experiences. When companies bow to pressure from oppressive governments or other powerful groups to respect “local sensitivities,” they become complicit in shutting down the expression of that diversity.

“Platforms seem to take direction from louder, more powerful voices…In the Middle East, [they] have not been able to stand up to powerful interests like governments,” Anderson said.

Take, for example, Facebook’s willingness to comply with the Turkish government’s censorship demands. Over the years, the company has been involved in censoring criticism of the government, religion, and the republic’s founder Atatürk, as well as Kurdish activists, LGBT content, and even an anti-racism initiative.

Facebook’s complicity with these requests appears to be deeply ingrained. I spoke to a Turkish activist two years ago who told me that he believed the platform “was turning into a pro-government media.” Today, the platform continues to comply, restricting access to more than 4,500 pieces of content inside the country in 2017 alone. Facebook is not transparent about the number and rates of requests it complies with.

Research from the 2018 Corporate Accountability Index showed that while Facebook publishes some information about government requests it receives to remove content, it does not disclose the number of requests received by country or give data about the subject matter associated with these requests. This makes it impossible to determine the company’s compliance rates with these requests or the nature of the content being removed.

“The biggest shortcoming in [the] ways platforms deal with takedown requests is [their] lack of understanding of the political contexts. And even if there is some kind of idea of what is happening on the ground, I am not entirely sure, there is always due diligence involved,” said Arzu Geybulla, a freelance writer who covers Turkey and Azerbaijan for Global Voices.

In conference settings, representatives from Facebook routinely face questions about massive flagging campaigns. They maintain that multiple abuse reports on a single post or page do not automatically trigger its removal. But they offer little concrete information about how the company actually assesses and responds to these situations. Does the company review the content more closely? Facebook representatives also say that they consult with local experts on these issues, but the specifics of those consultations are similarly opaque.

And the work of moderating content—deciding what meets local legal standards and Facebook’s own policies—is not easy. Anderson from onlinecensorship.org said:

“Content moderation is incredibly labor intensive. As the largest platforms continue to grow, these companies are attempting to moderate a staggering volume of content. Workers (who may not have adequate knowledge and training, and may not be well paid) have to make snap decisions about nuanced and culturally-specific content, leading to frequent mistakes and inconsistencies.”

For activists and human rights advocates in the region, it is also difficult to know the scope of this problem due to lack of corporate transparency. Cases like that of iTunes may be occurring more often than is publicly known—it is only when someone speaks out about being censored that these practices come to light.

In light of growing concerns from the public and rights groups, companies should take concrete steps to be more transparent about their content moderation practices. They should publish transparency reports that include comprehensive data about the circumstances under which content or accounts may be restricted. These reports should also disclose the number of content removal requests received from governments, broken down by country, as well as the number of such requests with which they comply.


Network shutdowns ordered in India to prevent exam cheating and quell protests 

Despite human rights concerns, authorities in India continue to restrict access to the internet. This month, several network shutdowns were recorded for a number of reasons, including to prevent exam cheating, to quell student protests, and to prevent protesters from organizing in Kashmir.

On July 20, the Manipur government ordered a five-day internet shutdown over protests denouncing financial misconduct and mismanagement by the vice president of a local university. In mid-July, the Rajasthan state government ordered the suspension of internet services across the state to prevent cheating during police recruitment exams. Internet services are also expected to be suspended on July 29 throughout the state of Arunachal Pradesh during exams for public sector recruitment. The local government ordered the shutdown “to ensure free and fair conduct” during the exams, local media reported.

The number of shutdowns in India is increasing at a “staggering” rate, according to the Software Freedom Law Center (SFLC). As of July 2018, 68 shutdowns have already been recorded. The actual number is very likely higher, as many shutdowns go unreported, the SFLC said.

Telecommunications companies should be transparent about their processes for responding to government requests to restrict access to their networks. They should disclose how they handle government network shutdown demands, including under whose authority a shutdown is ordered, so that those responsible can be held accountable. None of the 10 telecommunications companies evaluated in the 2018 Corporate Accountability Index disclosed sufficient information about how they handle such demands. Bharti Airtel, which has the largest market share in India, provided only some information about why it may shut down service to a particular area or group of users. Vodafone was the only company to clearly disclose its process for responding to these types of government demands and to clearly commit to pushing back against them when possible. Telefónica was the only company that disclosed the number of shutdown requests it received.
