On April 18, Anas Qtiesh wrote about spambots targeting the #Syria hashtag in an effort to drown out speech calling for, or reporting on, protests. While this specific case has received an abundance of attention, on Al Jazeera, Fast Company, and elsewhere, it is representative of a larger problem on social networks: the use of automated accounts, or bots, targeting a search term in an effort to silence a certain type of speech.
But why does this matter? As many have pointed out, there is no censorship here: users are not seeing their content removed. The problem is that observers and journalists alike have come to rely on Twitter's search function to find information about a subject, and when a search term is targeted by automated content, valuable information, such as reports of a protest, gets drowned out.
This morning I was alerted to another example: “Ahava spambots.” Ahava, an Israeli company that relies on resources in the Occupied West Bank, is the target of numerous boycott campaigns. On Twitter, accounts like @BoycottAhava share news and information about the boycott, sending tweets manually. In what appears to be an effort to drown out information about the boycott, numerous spambots have sprung up in recent days, tweeting the same statements repeatedly across multiple accounts, sometimes using the same avatar:
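The pattern described above, identical or near-identical text posted verbatim by many distinct accounts, is one of the simplest bot signatures to spot. As a rough illustration (not any method Twitter actually uses), the sketch below groups tweets by lightly normalized text and flags messages that appear across several accounts; the account names and threshold are hypothetical:

```python
from collections import defaultdict

def flag_duplicate_tweets(tweets, min_accounts=3):
    """Flag tweet texts posted verbatim by several distinct accounts.

    tweets: iterable of (account, text) pairs.
    Returns a dict mapping suspicious text -> set of accounts posting it.
    """
    posters = defaultdict(set)
    for account, text in tweets:
        # Normalize case and whitespace so trivial variations still match.
        normalized = " ".join(text.lower().split())
        posters[normalized].add(account)
    # Keep only texts repeated across at least min_accounts accounts.
    return {text: accts for text, accts in posters.items()
            if len(accts) >= min_accounts}

# Hypothetical sample: three accounts repeating one message, one genuine account.
sample = [
    ("@bot1", "Visit the Dead Sea today!"),
    ("@bot2", "Visit the dead sea   today!"),
    ("@bot3", "Visit the Dead Sea today!"),
    ("@BoycottAhava", "New report on the boycott campaign"),
]
print(flag_duplicate_tweets(sample))
```

A real system would need far more signals (account age, posting cadence, shared avatars), but even this naive grouping separates the coordinated repeats from the lone genuine account in the sample.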
It has become clear that targeting a search term or hashtag is an easy and sometimes effective way to drown out important speech. Twitter is typically responsive, removing automated accounts from search results or deleting obvious spam accounts such as those shown above. But if this is truly an emerging tactic, there is a considerable risk that Twitter will not be able to keep up with the bots; one solution might be for Twitter to create a dedicated mechanism for reporting such tactics. In the meantime, users should take care to distinguish genuine speech from the output of spammers.