Digital apartheid and the use of social media algorithms in humanitarian crises

People protesting in Palestine. Photo by Alfo Medeiros from Pexels, used under a Pexels License.

Digital apartheid is one of many new terms that describe the limitations of the internet and the power big tech platforms hold to choose the content we see on our feeds, a power that can lead to the stifling and censorship of dissenting political voices speaking out against erasure.

Lately, users have become more aware of a wave of censorship reinforced by platforms through “shadow banning,” a form of censorship that restricts the visibility of certain content without notifying the user, and through accusations of “violating community guidelines.” These are just some of the many terms used to silence voices speaking about the current Palestinian genocide, ongoing since October 7.

History has borne witness to how war exacerbates humanitarian crises. What is different about the wars we are witnessing in this age, however, is the power of the internet and the role of big tech platforms. After Russia invaded Ukraine, we saw the role these platforms played in amplifying the voices of people surviving the war. There was a rise in tech war activism, in which social media platforms chose to pick a side and tailored their services to assist Ukrainian citizens.

While this move helped elevate voices on the ground and highlighted potential war crimes, it is worth asking whether the same standards are applied to countries outside the Global North. The most recent flare-up of the Palestine–Israel conflict has raised a range of questions about big tech platforms amplifying certain voices in certain humanitarian crises while conveniently ignoring the rest.

According to an Amnesty International report, Meta’s algorithmic models and profit-making interests amplified atrocities during the Rohingya Muslim crisis in Myanmar (October 2016–January 2017). While security forces in Myanmar carried out a widespread ethnic cleansing campaign on the ground, hateful and harmful content proliferated on Meta’s platform, Facebook, inciting further hatred, violence, and discrimination against the community. In 2018, the company admitted it had not done enough to prevent the escalation of this content on its platforms, and it has since been sued in the UK and US. The case is ongoing.

More recently, according to the Palestinian NGO 7amleh (The Arab Center for the Advancement of Social Media), there were 1,447 verified violations of Palestinian digital rights between October 7 and November 14. This number includes 573 cases of account restrictions or content takedowns affecting Palestinian users and their supporters, as well as 904 manually verified cases of hate speech, incitement to violence, and other forms of technology-facilitated online violence.

7amleh developed an AI-powered language model that monitors the spread of hate speech in Hebrew against Palestinians and pro-Palestine users across platforms. Its violence indicator has documented over one million classified cases of hate speech, most of them found on X (formerly Twitter). According to 7amleh, 68 percent of the documented instances of hate speech and incitement were based on political affiliations and/or nationalist sentiments, 29 percent on racial bias, and the remainder involved gender-based violence and religious violence, among other categories.

Recently, users across the globe have noticed big tech platforms limiting the organic reach of their posts, reporting that fewer people are able to view their content, particularly content related to the ongoing genocide and war in Gaza. Numerous activists and ordinary users have reported that big tech platforms like Facebook, Instagram, X, YouTube, and TikTok are shadowbanning pro-Palestinian content.

There have also been reports of Instagram removing pro-Palestinian accounts that serve as a source of news for countless people online, such as @eye.on.palestine, which was removed and later reinstated on the platform and has a following of 8.8 million users. Instagram has also come under fire for inserting the word “terrorist” into the auto-translated biographies of users describing themselves as Palestinian. The crisis is already being termed the “algorithmically driven fog of war,” with artificial intelligence (AI) and generative AI increasingly being used to spread disinformation against pro-Palestinian voices.

International rights groups like Amnesty International and Access Now have also issued statements about the ongoing racism and hate speech Palestinians are facing online and the need for platforms to do more in times of crisis instead of censoring and banning citizens already witnessing an ongoing war on the ground.

To get around these targeted roadblocks, some users have been taking “algorithm breaks” and “fooling the algorithms” by posting everyday stories and slipping stories about the ongoing war in between.

Users have also added bleeping sounds to mask voice-overs, altered the spelling of common English and Arabic words like “Palestine,” “genocide,” and “Hamas” to evade detection, and sandwiched images and videos from Gaza between regular posts and reels to spread the word. Some users have adopted “algospeak,” coining new words in place of flagged keywords so their posts are not picked up by algorithms and removed, an evasion tactic aimed at automated moderation, as the sketch below illustrates.
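To see why altered spellings can slip past keyword-based filtering, consider this minimal, purely illustrative Python sketch of a naive blocklist filter. Everything here is hypothetical: real platform moderation relies on machine-learning classifiers, image analysis, and human review rather than a simple word list, but the sketch shows the weakness that algospeak exploits.

```python
# Illustrative only: a naive keyword filter of the kind "algospeak" evades.
# The blocklist and function names are hypothetical; real platforms use
# ML classifiers and human review, not simple word lists.

BLOCKLIST = {"palestine", "genocide"}  # toy set of flagged keywords

def naive_filter(post: str) -> bool:
    """Return True if the post contains an exact blocklisted word."""
    words = post.lower().split()
    return any(word.strip(".,!?\"'") in BLOCKLIST for word in words)

posts = [
    "Stop the genocide in Palestine",   # flagged: exact keyword match
    "Stop the g*nocide in P@lestine",   # evades: symbol substitutions
    "Stop the gen0cide in Pale5tine",   # evades: digit substitutions
]

for post in posts:
    print(f"flagged={naive_filter(post)!s:<5} {post!r}")
```

Because the filter only matches exact strings, a single swapped character defeats it, which is why platforms escalate to fuzzier detection and why users, in turn, keep inventing new spellings.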

Global Voices spoke to Mona Shtaya, a digital rights defender based in Palestine, via LinkedIn, about the digital apartheid Palestinian citizens have been experiencing and how important big tech platforms are during this time. She said:

Given the inadequate and biased coverage by international mainstream media and the targeting of journalists, social media platforms should serve as a means for Palestinians to share their narrative. However, the reality differs significantly.

These platforms heavily censor Palestinian voices, shadowban Palestinians and their supporters, and infringe upon their rights to free speech, assembly, access to information, political participation, and protection from discrimination. These violations closely resemble those witnessed in 2021 [in Palestine], representing a systematic and deliberate suppression of Palestinian voices, as confirmed by the Sustainable Business Network and Consultancy (BSR) report. This highlights the platforms’ failure to uphold fundamental human rights principles.

Shtaya added that algorithmic glitches and the shadowbanning of pro-Palestinian content are making it harder for people to share their stories with the world and to call for an immediate ceasefire.

She said, “Social media censorship is suffocating us, exacerbating our suffering and the fight against systemic discrimination. It amplifies self-censorship and creates a chilling effect, ultimately compounding the oppression faced by marginalized communities.”

Regarding what audiences can do to help, Shtaya noted:

People should be aware of falling victim to disinformation and one-sided narratives. Social media censorship impedes Palestinians from sharing their perspectives. Therefore, the general public should proactively fact-check the news they receive regarding events on the ground and ensure they engage with and listen to the Palestinian narrative. Additionally, individuals should support Palestinian voices by following Palestinian accounts and sharing their content.

In times of humanitarian crisis, social media platforms have proven to be the only outlet for documenting events and educating people across the globe about abuses and suffering on the ground. When platforms choose to discriminate against certain identities, they commit a grave human rights violation and create a digital apartheid that only exacerbates these crises.


Seerat Khan is the programs lead at the Digital Rights Foundation in Pakistan, and has done extensive work on gender and technology over the past seven years. She mostly works with women human rights defenders and women journalists on key themes like data protection, online safety, gender, privacy and misinformation.
