This post draws on stories originally reported by Fernanda Canofre, Sahar Habib Ghazi, Ellie Ng (via Hong Kong Free Press), Dalia Othman, Inji Pennu and Thant Sin.
During the 2011 Arab Uprisings, Facebook proved itself to be one of the most powerful technological catalysts for free speech and democratic mobilization that the world had ever seen. While it did not cause the uprisings, it was a critical driver of their growth.
In that same year, the number of Facebook users in Africa, Asia, Latin America and the Middle East (i.e. the “Global South”) surpassed the number of users in Europe and North America. From this moment on, it was truly a global platform, despite being a US company.
Since then, governments and other actors have also awoken to the fact that social media can serve their own interests, from monitoring people's activities and behavior, to promoting political campaigns, to rallying lynch mobs.
The Global Voices community knows these dynamics all too well. As a community of writers and activists, we’ve faced censorship, harassment and direct threats because of our activities on Facebook since the early days of the platform. We’ve been writing about these experiences for more than a decade, and we've conducted special research on Facebook's products, including Instagram and Free Basics.
We also know that for Facebook, and for anyone trying to understand how tech platforms and policies interact with free speech, privacy and other civil and political rights, past experience is instructive.
Here is a look back at some of our most influential coverage of hate speech, harassment, and political censorship on the world’s largest social network.
For a full list of past stories, visit our Facebook coverage archive.
For Indian activists, “real” names can have real-life consequences
In 2015, after a woman activist in southern India became a top target for sexual harassment and threats of violence on Facebook, her account was suspended. Someone had reported her for violating the company's “authentic identity” (or “real name”) policy. With no warning, she was instantly locked out. And the only way she could regain access to her account was by sending Facebook some form of official ID. With no other option, this is what she did.
Facebook reinstated her page using her full name, which included her caste name. She had never used her caste name on her Facebook page, or anywhere else in her public identity. This left her more exposed to harassment than ever before.
In concert with a coalition of digital rights and LGBT groups, Global Voices co-authored an open letter to Facebook identifying the multiple issues that this case raised, concerning the abuse of Facebook's systems, and the company's lack of cultural sensitivity on the question of what constitutes a “real name” or “authentic identity”.
Today, users can no longer be instantly suspended over a single report of “authentic identity” policy abuse. But the company still has a long way to go in resolving the question of how to respect the personhood of users who are not known by their legal names.
This work taught us a great deal about the complexities of identity on the internet. How does a technology determine who is a “real” person? How do ideas like citizenship and nation take shape online, especially when ethnic and territorial disputes are in play?
Our coverage of Palestine and Israel regularly touches upon these questions, both online and off.
Palestine: Hate speech and the digitization of occupation
During the 2014 war in Gaza, a Facebook page called “Until our boys are returned – we will kill a terrorist every hour” became immensely popular. The page featured multiple posts in Hebrew calling for violence against Palestinians and Arabs, including a post that called on readers to “burn Gaza” and bring “death to the Arabs.”
Despite many formal abuse reports filed by Facebook users, the page was not taken down for more than three weeks. When Global Voices writers spoke about it with Facebook staff, they did not directly address the page in question. They simply reiterated their commitment to their Community Standards.
Since this time, we have seen periodic media coverage of meetings taking place between Facebook staff and Israeli government representatives. What little information we have has left us concerned that Facebook may be employing a double standard on behalf of the Israeli government. A rapid rise in arrests of Arab and Palestinian Facebook users for their postings has contributed to these concerns as well.
In Myanmar, Facebook should ‘focus on context, rather than code’
In Myanmar, social media networks exploded with hate speech, fake news photos and racist narratives when the Myanmar military clashed with the Arakan Rohingya Salvation Army (ARSA) in August 2017 and launched ‘clearance operations’ in the villages of Rakhine state, forcing hundreds of thousands of Rohingya Muslims to flee the country.
During this time, ample anti-Rohingya propaganda spread online. Rohingya people and others who sought to protect them faced direct threats of violence on Facebook. As has been widely reported since the Zuckerberg hearings, when Burmese civil society groups asked Facebook to help by removing these threatening posts, the company was painfully slow to respond.
One tactic Facebook tried to deploy in the country was an automatic censorship technique that removed all posts containing the word “kalar” or ကုလား (in Burmese script), a term used by ultra-nationalists and religious fundamentalists to attack Muslims in Myanmar. Users in Myanmar discovered this tactic when they found that any post containing the word — including posts discussing use of the word, or even posts with longer words that contained it (e.g. “kalarkaar”, which means curtain) — had been removed and labeled as hate speech.
In response, one of Global Voices’ local contributors wrote: “instead of simply deciding to censor the word “kalar”, [Facebook] should have reviewed and learned from ongoing initiatives that aim to combat online hate speech in Myanmar that focus on context, rather than code.”
Censoring Tiananmen: Facebook activism in Hong Kong
The “Special Administrative Region” of Hong Kong represents another complex territory when it comes to the adjudication of speech on social media. While the government in mainland China employs an aggressive censorship regime in which Facebook is blocked altogether, the network is accessible and popular in Hong Kong, especially among pro-democracy activists.
The distinction between these territories is regularly tested when citizens attempt to discuss politically sensitive topics. The 1989 massacre of student protesters in Beijing's Tiananmen Square is perhaps the most enduring of these topics.
In 2017, our partners at Hong Kong Free Press co-published with us the story of Fung Ka Keung, a leader of Hong Kong's teachers’ union who created a temporary profile picture frame commemorating the 1989 massacre.
Within 24 hours, Fung received a notification from Facebook saying that his frame had been rejected for failing to meet the company’s terms and policies. The accompanying message explained that the frame “belittles, threatens or attacks a particular person, legal entity, nationality or group.”
After the incident was reported in local media, the social media giant issued an apology and approved the original frame. Why did Facebook reject it in the first place? Many speculated that it was not just a simple error, but rather an attempt to kowtow to mainland China, where Facebook has been blocked since 2009.
Alongside activism and content that is intentionally political, stories or even rumors on Facebook can escalate to situations of vigilante violence or real-life harm. Our final story looks at one such incident that took place in Brazil in 2014.
Killed by a lynch mob, and a false rumor
In Brazil, Fabiane Maria de Jesus died at the hands of a lynch mob driven by a series of vicious online rumors, which rapidly escalated on Facebook.
Alerts about a woman who allegedly had been kidnapping children in the seaside resort town of Guarujá, in Brazil, were sent to 24,000 people through the Facebook page Guarujá Alerta (Guaruja Alert). The alert included a sketch, which closely resembled de Jesus. When one user erroneously suggested that the woman in the sketch was de Jesus, online outrage escalated into a real-life lynch mob.
Local police had no records of missing children at that time. The sketch came from a 2012 child kidnapping case in Rio de Janeiro and had appeared, also on Facebook, in several different contexts, falsely linked to crimes in other Brazilian states.
According to A Tarde newspaper, a group of friends of one of the suspects united to protest in front of the police department. The group yelled:
Quer prender todo mundo? A culpa é de todo mundo! A culpa é de ninguém! A culpa é da internet!
Do you want to arrest everybody? It’s everybody’s fault! It’s nobody’s fault! It’s the internet’s fault!