An original version of this article was previously published on Global Voices. This post is published as part of an editorial partnership between Global Voices and Ranking Digital Rights.
Following the Charlie Hebdo shootings in January 2015, Facebook co-founder and CEO Mark Zuckerberg posted a message reflecting on religion, freedom of expression and the controversial editorial line of the magazine.
“A few years ago, an extremist in Pakistan fought to have me sentenced to death because Facebook refused to ban content about Mohammed that offended him. We stood up for this because different voices—even if they’re sometimes offensive—can make the world a better and more interesting place,” Zuckerberg wrote on his page.
Later that same month, Facebook agreed to restrict access to an unspecified number of pages for “offending prophet Muhammad” in Turkey at the request of local authorities.
Turkey is notorious for the number of requests it makes to internet companies to remove content for violating its local laws, and it is not the only government in the Middle East to resort to such tactics to silence critical voices.
While a number of the region’s governments sometimes make direct requests for content removal—along with exerting “soft” pressure through other means—the failures of tech giants in moderating content in the region further exacerbate the struggle of users to exercise their right to freedom of expression.
The issue highlights a critical need for internet platforms to be more transparent about the role that governments, private parties, and companies themselves play in policing the flow of information online.
Research from the Ranking Digital Rights 2018 Corporate Accountability Index showed that most of the world’s powerful platforms failed to disclose enough information about their content moderation policies and practices. For instance, just four of the 12 companies evaluated—Facebook, Google, Microsoft, and Twitter—provided any data about the volume and nature of content and accounts they remove for terms of service violations. Most failed to disclose how they identify content that violates their terms—and not one company revealed whether it gives governments priority in flagging content or accounts that breach its rules.
Abuse of flagging mechanisms
Across the Middle East, social media platform “flagging” mechanisms are often abused to silence government critics, minority groups or views and forms of expression deemed out of line with majority beliefs about society, religion and politics.
In 2016, Facebook suspended several Arabic-language pages and groups dedicated to atheism following massive flagging campaigns. This effectively eliminated one of the few (in some cases, the only) spaces where atheists and other minorities could come together to share their experiences and freely express themselves on matters related to religion. Across the region, atheism remains taboo, and those who openly express it risk harassment, imprisonment or even murder.
“[Abusive flagging] is a significant problem,” Jessica Anderson, a project manager at onlinecensorship.org, which documents cases of content takedowns by social media platforms, told Global Voices.
“In the Middle East as well as other geographies, we have documented cases of censorship resulting from ‘flagging campaigns’—coordinated efforts by many users to report a single page or piece of content.”
Flagging mechanisms are also abused by pro-government voices. Earlier this year, Middle East Eye reported that several Egyptian political activists had their pages or accounts suspended and live-streams shut down, after they were reported by “pro-government trolls.”
“What we have seen is that flagging can exacerbate existing power imbalances, empowering the majority to ‘police’ the minority,” Anderson said. “The consequences of this issue can be severe: communities that are already marginalized and oppressed lose access to the benefits of social media as a space to organize, network, and be heard.”
Failure to consider user rights in context
This past May, Apple joined the ranks of Facebook and Twitter—the more commonly cited social media platforms in this realm—when the iTunes store refused to upload five songs by the Lebanese band Al-Rahel Al-Kabir. The songs mocked religious fundamentalism and political oppression in the region.
A representative from iTunes explained that the Dubai-based Qanawat, a local content aggregator hired by Apple to manage its store for the region, elected not to upload the songs. An anonymous source told The Daily Star that iTunes did not know about Qanawat’s decision, which it made due to “local sensitivities.” In response to a petition from Beirut-based digital rights NGO SMEX and the band itself, iTunes uploaded the songs and pledged to work with another aggregator.
This case not only illustrates how “local sensitivities” can interfere with decisions about which types of content get posted and stay online in the region, but also shows that companies need to practice due diligence when making decisions likely to affect users’ right to freedom of expression.
Speaking to Global Voices, Mohamad Najem, co-founder of SMEX, pointed out that both Facebook and Twitter have their regional offices located in the United Arab Emirates (UAE), which he described as one of the “most repressive countries” in the region.
“This is a business decision that will affect free speech in a negative way,” he said. He further expressed concern that the choice of having an office in a country like the UAE “can sometimes lead to enforcing Gulf social norm[s]” on an entire [Arab] region that is “dynamic and different.”
Location, location, location
Facebook and Twitter have offices in the UAE that are intended to serve the Middle East and North Africa (MENA), a region that is ethnically, culturally and linguistically diverse, and presents a wide range of political viewpoints and experiences. When companies yield to pressure from oppressive governments or other powerful groups to respect “local sensitivities,” they become complicit in shutting down the expression of that diversity.
“Platforms seem to take direction from louder, more powerful voices…In the Middle East, [they] have not been able to stand up to powerful interests like governments,” Anderson said.
Take, for example, Facebook’s willingness to comply with the Turkish government’s censorship demands. Over the years, the company has been involved in censoring criticism of the government, of religion and of the republic’s founder Ataturk, as well as Kurdish activists, LGBT content and even an anti-racism initiative.
Facebook’s complicity with these requests appears to be deeply ingrained. Two years ago, a Turkish activist told me that he believed the platform “was turning into a pro-government media.” Today, the platform continues to comply, restricting access to more than 4,500 pieces of content inside the country in 2017 alone. Facebook is not transparent about the number of such requests it receives or the rate at which it complies with them.
Research from the 2018 Corporate Accountability Index showed that while Facebook publishes some information about government requests it receives to remove content, it does not disclose the number of requests received by country or give data about the subject matter associated with these requests. This makes it impossible to determine the company’s compliance rates with these requests or the nature of the content being removed.
“The biggest shortcoming in [the] ways platforms deal with takedown requests is [their] lack of understanding of the political contexts. And even if there is some kind of idea of what is happening on the ground, I am not entirely sure, there is always due diligence involved,” said Arzu Geybulla, a freelance writer who covers Turkey and Azerbaijan for Global Voices.
In conference settings, representatives from Facebook routinely face questions about massive flagging campaigns. They maintain that multiple abuse reports on a single post or page do not automatically trigger its removal. But they offer little concrete information about how the company actually assesses and responds to these situations. Does it review the flagged content more closely? Facebook representatives also say that they consult with local experts on these issues, but the specifics of these consultations are similarly opaque.
And the work of moderating content—deciding what meets local legal standards and Facebook’s own policies—is not easy. Anderson from onlinecensorship.org said:
“Content moderation is incredibly labor intensive. As the largest platforms continue to grow, these companies are attempting to moderate a staggering volume of content. Workers (who may not have adequate knowledge and training, and may not be well paid) have to make snap decisions about nuanced and culturally-specific content, leading to frequent mistakes and inconsistencies.”
For activists and human rights advocates in the region, it is also difficult to know the scope of this problem due to a lack of corporate transparency. Cases like that of iTunes may be occurring more often than is publicly known—it is only when someone speaks out about being censored that these practices come to light.
In light of growing concerns from the public and rights groups, companies should take concrete steps to be more transparent about their content moderation practices. They should publish transparency reports that include comprehensive data about the circumstances under which content or accounts may be restricted. These reports should also disclose the number of content removal requests they receive from governments in each country, as well as the number of such requests with which they comply.