At a crucial time of global crises, X (formerly Twitter) dilutes its violent speech policy

On October 25, 2023, the social media platform X, formerly known as Twitter, made changes to its global Community Guidelines policy. According to the Platform Governance Archive,* a data repository that automatically tracks policy changes across 18 platforms, X significantly softened its violent speech policy. X (as Twitter did before it) defines violent speech as violent threats or wishes of harm.

Overall, these changes mean that X is significantly decreasing both the scope of its provision on violent speech and the consequences for when such speech is detected. X seems to be signaling to its users that penalties for posting violent content will now apply to a narrower range of expression, and that the penalties themselves may be less severe.

These changes come as X is suing the Center for Countering Digital Hate (CCDH), a non-profit that fights hate speech and disinformation, accusing it of making false claims. As Reuters reported, the lawsuit follows a July 2023 report, published by media outlets using CCDH research, which stated that hate speech towards minority communities on the platform had risen since Elon Musk bought the company in October 2022. The Guardian also recently noted that the EU, under its Digital Services Act (DSA), has warned X over allowing “alleged disinformation about the Hamas attack on Israel” on its platform, while 7amleh, the Arab Center for the Advancement of Social Media, has documented escalating online discrimination, racism, incitement, and misinformation on X.

The changes that X has implemented include:

From a zero tolerance policy to merely reducing visibility

Earlier, X's Community Guidelines said:

X is a place where people can express themselves, learn about what’s happening, and debate global issues. However, healthy conversations can’t thrive when violent speech is used to deliver a message. As a result, we have a zero tolerance policy towards violent speech in order to ensure the safety of our users and prevent the normalization of violent actions. [emphasis added]

Now, it reads:

X is a place where people can express themselves, learn about what’s happening, and debate global issues. However, healthy conversations can’t thrive when violent speech is used to deliver a message. As a result, we may remove or reduce the visibility of violent speech in order to ensure the safety of our users and prevent the normalization of violent actions. [emphasis added]

The shift from “zero tolerance” to “we may remove or reduce the visibility” is illustrated further on in the revised Community Guidelines.

From suspending accounts in “most cases” to only “certain cases”

Earlier, X's penalties were severe for most violations of its policies:

in most cases, we will immediately and permanently suspend any account that violates this policy. For less severe violations, we may instead temporarily lock you out of your account before you can Post again. [emphasis added]

The new version has been softened for most violations:

in certain cases, we will immediately and permanently suspend any account that violates this policy. However, for most violations, we may instead temporarily lock you out of your account before you can Post again. [emphasis added]

From firmly suspending to maybe suspending the account

Elsewhere, X's Community Guidelines also state that violent content may, in rare cases, be made less visible, but that continued violations of the policy after receiving a warning may result in accounts being “permanently suspended.” The previous version of this policy said that such accounts will definitely “be permanently suspended.”

From evaluating the context to not evaluating the context 

The company also reformulated its definition of content that is exempt from its violent speech policy, stating that “We also allow certain cases of figures of speech, satire, or artistic expression when the context is expressing a viewpoint rather than instigating actionable violence or harm.” This represents a more precise description of exempted content. The previous policy said “We make sure to evaluate and understand the context behind the conversation before taking action.”

Two weeks ago, Meta's Facebook also changed its policy on sadistic, imaginary, and violent content; however, if anything, that policy has become more restrictive.

At the moment, Meta and X, as well as other very large online platforms such as YouTube and TikTok, are going through an EU Commission compliance investigation under the DSA. The European Commission issued requests to X, Meta, and TikTok, asking them to report how they have mitigated the dissemination of illegal content and disinformation regarding the Israeli-Palestinian conflict.

The DSA, which entered into force in November 2022, obliges all very large online platforms in the EU to mitigate societal risks, including “illegal and harmful” speech. There are currently 17 designated very large online platforms, each with at least 45 million users per month. At the same time, however, some civil society organizations in Europe have voiced concerns about using the DSA as an instrument to pressure platforms to swiftly delete content in times of crisis, which could violate free speech and undermine human rights.

*The Platform Governance Archive (PGA) is a data repository and platform that collects and curates the policies of major social media platforms from a long-term perspective. It is maintained and curated by the Platform Governance, Media and Technology Lab and the Center for Media, Information, and Communication Research (ZeMKI), University of Bremen, Germany.
