The advent of the internet marked a radical shift in how people publish and share ideas — breaking boundaries and removing intermediaries. But this change came with the new burden of regulating and setting policies for content moderation.
In Sudan, a country where just 31% of the population has access to the internet, online platforms are struggling to enforce guidelines and regulations to monitor content deemed harmful, such as hate speech and disinformation.
Meanwhile, technology companies based in the United States, such as Facebook, Instagram, and Twitter, often do not enforce their own published policies. In many cases, these US-based platforms fail to address harmful content at all.
Doxxing, impersonation and disinformation
In May, Lugain Mohamed, a Sudanese women's rights activist, reported an active Instagram account that featured images of Sudanese women without their permission. This violated Instagram’s community guidelines, which do not allow users to share photos they did not take or do not have the right to share.
The Facebook-owned platform has yet to take action: the account remains active because Instagram determined that it did not violate its policies, Mohamed confirmed to Global Voices in an email interview. She added that, as a result of this experience, she now self-censors more of the content she shares on social media:
Having been threatened by the admin of the page to share the rest of my pictures if I continued asking people to report the page made me a bit hesitant about continuing to campaign against such pages, as it threatened my personal safety.
This practice of sharing women's pictures without their consent is not new in Sudan.
In 2016, over 15 female activists were doxxed on a Facebook page called “Sudanese Women against the Hijab.” Doxxing is the practice of publishing someone's personal information without their permission with the intention to threaten or intimidate. The activists’ social media pictures were posted without their consent alongside fabricated quotes about being against the headscarf and religion. The page was later removed by Facebook after many reported it as a violation of its community standards.
During Sudan's revolution that began in December 2018, Twitter seemed to be an ideal space for impersonation: Bad actors hijacked the accounts of politicians, ministers, journalists, and activists. Journalist Wasil Ali launched a campaign asking the public to report these accounts.
As a result, some were blocked while others are still active. In an email interview with Global Voices, Ali wrote:
…[F]ake accounts are, for the most part, used to harvest followers but unfortunately, a good number of them are used to sow division by spreading misinformation that has the potential to trigger unrest or even violence. Also in the simplest of terms, it would sow discord among Sudanese whether political or even tribal.
In June 2019, the government-backed militia group known as the Rapid Support Forces (RSF) cracked down on protesters opposing military rule in Khartoum, in what was later referred to as the “Khartoum massacre.” Human Rights Watch and Amnesty International published reports detailing evidence that pointed to a series of human rights violations by the RSF militia.
Yet, days after the crackdown, an Egyptian company called New Waves launched an influence campaign on social media, including Facebook and Instagram, aimed at polishing the image of the RSF militia and its leaders.
To this day, Facebook has failed to address multiple requests to remove RSF militia content, arguing that its leader, Mohamed Dagalo, who also serves as deputy head of the Sovereignty Council, is a current state actor, even though the company has already removed the account of a Myanmar official wanted for war crimes.
Are platforms doing enough?
In some cases, directly contacting platforms helps to eliminate harmful content, as with a fake Instagram account that went viral last year during the revolution. The account purported to provide meals to Sudanese people and also spread misinformation. Instagram removed the account only after The Atlantic, a US-based news outlet, contacted the company.
In July 2018, YouTube shut down the Zoal Cafe channel without issuing any statement, after users reported an earlier episode rebutting a Kuwaiti TV show that mocked Sudanese people. The rebuttal itself was considered by some to be racist. The channel was reinstated after only three months.
Facebook also took steps to implement its Inauthentic Behavior Policy in relation to Sudan. The company defines “inauthentic behavior” as “engag[ing] in behaviors designed to enable other violations under our Community Standards,” through tactics such as the use of fake accounts and bots.
In October 2019, Facebook removed a network of fake accounts linked to Yevgeniy Prigozhin, a Russian financier later placed on a US State Department sanctions list for his role in providing “support for preserving authoritarian regimes, such as that of former Sudanese President Omar al-Bashir, and exploitation of natural resources.” According to Facebook's statement, the takedown covered “17 Facebook accounts, 18 Pages, 3 Groups and six Instagram accounts that originated in Russia and focused primarily on Sudan.”
However, activists say that platforms are not doing enough.
Lugain Mohamed said that she managed to get her own picture taken down, but not others because Instagram's policy requires “the owner of the picture to report it themselves.” She says:
This is problematic in many ways, as these accounts are growing in number and followers day by day and they’re making a profit out of taking women’s pictures by advertising for different companies.
As for Twitter's response to Ali's campaign to report impersonation accounts, Ali wrote:
Twitter is extremely slow on cracking down on these accounts & sometimes unwilling to take them down (like with an account impersonating me & tweeting fake news). Twitter simply refers you to their policy on these accounts.
Twitter still does not offer phone verification for its users in Sudan, adding a barrier to verifying accounts and leaving room for more fake accounts to exist. In June, an online campaign was launched calling on Twitter to offer this feature. In March 2018, Jack Dorsey, the CEO of Twitter, tweeted about the issue, but Twitter has not changed its position.
Lack of legal protections
In addition to platforms’ inaction, Sudan also lacks strong legal measures that protect users online.
A number of existing legal provisions in Sudan offer protection against some types of harmful content. For example, the 2007 IT Crimes Law forbids defamation and violations of the sanctity of personal life, which can be used to hold to account those who share photos of others without their consent.
The 2018 IT crimes law, whose final draft was not shared publicly, criminalises the use of “the internet or any communications or information means, to incite hatred against foreigners, causing discrimination and hostility.” The parliament of the deposed regime amended the draft law in 2018, and the transitional council in charge of governance amended it again in 2020, but the Ministry of Justice has not yet shared the final full version.
Additionally, Article 87 of the Postal and Telecommunication Act punishes those who send threatening content.
Regulating this content proves challenging and sometimes becomes a threat to freedom of expression, particularly in a country with a long history of human rights abuses.
Sudan has several vague laws that criminalize speech guaranteed by international human rights standards. According to the 2019 Freedom on the Net Report, the government “openly acknowledges blocking and filtering websites that it considers ‘immoral’ and ‘blasphemous,’ such as pornography sites.”
In January 2019, the state prosecutor issued arrest warrants against 38 journalists and activists, accusing them of spreading fake news, a term only vaguely defined in the IT crimes law.
Adequately addressing online harmful content in Sudan requires collective action.
Tech companies and online platforms need to adhere to content regulation policies and make them transparent and visible to users. They must listen to local activists and take into account their concerns when implementing policies.
Sudan's current administration must amend existing laws to protect users from doxxing and hateful speech, without further endangering users’ fundamental rights and freedoms.
This article is part of a series called “The identity matrix: Platform regulation of online threats to expression in Africa.” These posts interrogate identity-driven online hate speech or discrimination based on language or geographic origin, misinformation and harassment (particularly against female activists and journalists) prevalent in digital spaces of seven African countries: Algeria, Cameroon, Ethiopia, Nigeria, Sudan, Tunisia and Uganda. The project is funded by the Africa Digital Rights Fund of The Collaboration on International ICT Policy for East and Southern Africa (CIPESA).