On WhatsApp, Fake News Is Nearly Impossible to Moderate. Is That a Bad Thing?

Image via Pixabay, public domain.

With the number of social media users in India rapidly rising, the dissemination of fake news has become a widespread phenomenon in recent years.

So-called “information overload” has made it difficult to separate the wheat from the chaff, and in some cases, misinformation spread via social media appears to have precipitated real-life violence, sometimes with fatal consequences.

In one recent incident, Twitter users in India expressed their anger when a ruling party member shared an image taken out of context, in what seemed like an effort to stoke social tensions during riots in the Indian state of West Bengal. Several such images circulated on social media in this period, skewing public opinion. And in 2015, a possibly fake image that circulated via WhatsApp was later linked to the lynching of a Muslim man in India, over the suspicion that he had slaughtered a cow.

In India, reporting misinformation to police can be a first step towards prosecuting its sender under laws like Section 67 of the IT Act, if the information is perceived as likely to be “harmful to young minds”, or Section 468 of the Indian Penal Code, if the news is considered “detrimental” to someone's reputation. But policies like these are hard to implement effectively and routinely run afoul of protections for free expression.

Online civil society is also increasingly proactive, with the emergence of several hoax-slaying initiatives run by do-gooders from different spheres of life who try to expose fake news for what it is. But research has shown that civilian reporting of fake news is often not swift or thorough enough to curb the problem.

At the moment, the most likely mitigators of fake news online may be the social media companies themselves. But experts are still undecided on whether or how companies might change their behaviors — by choice or by regulation — in order to diminish the problem.

Facebook's “trending” tweaks

As a major venue for the spread of fake news, Facebook has found itself at the center of this debate. After the 2016 US election, critics charged that the prevalence of false stories smearing Hillary Clinton, spread mostly on Facebook, may have shaped the election's outcome. These allegations triggered an ongoing debate about how Facebook might moderate misinformation on its network, along with multiple technical tweaks intended to make the platform less friendly to fake news distributors.

Most recently, Facebook updated the formula behind its “Trending” feature. Where the section previously surfaced the posts with the highest engagement, it now shows only posts that have also been shared by “reputable sources.” Users are also invited to contribute to the system by reporting false news stories directly to the company.
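A rough sketch of what such a filtering rule might look like in code is below. This is purely illustrative: Facebook has not published its actual formula, and the source list, post fields, and threshold here are all invented for the example.

```python
# Hypothetical sketch of the "reputable sources" filter described above.
# Facebook has not published its actual formula; the source list, post
# fields, and threshold are invented purely for illustration.

REPUTABLE_SOURCES = {"example-wire-service.com", "example-newspaper.com"}

def engagement_score(post):
    """Old behaviour: rank candidates purely by raw engagement."""
    return post["shares"] + post["comments"] + post["reactions"]

def shared_by_reputable_sources(post, minimum=2):
    """New behaviour: a post qualifies for Trending only if it has been
    shared by at least `minimum` outlets on the reputable list."""
    return len(post["shared_by"] & REPUTABLE_SOURCES) >= minimum

posts = [
    {"id": 1, "shares": 90_000, "comments": 12_000, "reactions": 50_000,
     "shared_by": {"viral-rumour-site.example"}},
    {"id": 2, "shares": 30_000, "comments": 4_000, "reactions": 20_000,
     "shared_by": {"example-wire-service.com", "example-newspaper.com"}},
]

trending = sorted(
    (p for p in posts if shared_by_reputable_sources(p)),
    key=engagement_score,
    reverse=True,
)
print([p["id"] for p in trending])  # [2]: the high-engagement rumour post is filtered out
```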

But Facebook CEO Mark Zuckerberg says it is difficult to rely on feedback from users, who may flag accurate content as false for self-interested reasons. In fact, recent research seems to indicate that most people fail to distinguish between real and fake online content. This, along with the fact that most of the news we receive on social media comes from people in our close circles (and therefore people we generally trust), makes social media an ideal platform for propagating fake news.

The only thing that is certain is that there are major pitfalls for any entity — whether a company, a government, or an individual — that aims to separate out the real from the fake.

Thanks to encryption, WhatsApp can't moderate messages

While misinformation continues to circulate on standard social media platforms, all of the above examples from India reportedly went viral on WhatsApp. As the internet-based messaging app has become a key platform for disseminating news and information, for groups of friends and media houses alike, it has also increasingly served as a mechanism for distributing fake news.

But the picture becomes more complex when it comes to news and information spread through WhatsApp.

WhatsApp (which is owned by Facebook) is the leading messaging app for mobile users outside of the US. It is often easier to access via mobile phone than Facebook or other platforms that carry a higher volume of content and code.

But in contrast to the technology that supports Facebook, which allows the company to see and analyze what users post, WhatsApp operators have no way of seeing the content of users’ messages.

This is because WhatsApp uses end-to-end encryption, where only the sender (on one end) and receiver (on the other end) can read each other's messages. This design feature has been a boon for users — including journalists and human rights advocates — who wish to keep their communications private from government surveillance.
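To make this concrete, the sketch below shows the end-to-end idea using the PyNaCl library. It is not WhatsApp's actual implementation (WhatsApp uses the Signal Protocol, which adds forward secrecy and other guarantees), but it illustrates why a server that merely relays ciphertext cannot read it.

```python
# Minimal end-to-end encryption sketch using PyNaCl (pip install pynacl).
# A simplified stand-in, not WhatsApp's Signal Protocol: it only shows
# why the relaying server cannot read message contents.
from nacl.public import PrivateKey, Box

# Each user generates a key pair on their own device.
alice_secret = PrivateKey.generate()
bob_secret = PrivateKey.generate()

# Only public keys are ever shared (e.g. via the server).
alice_public = alice_secret.public_key
bob_public = bob_secret.public_key

# Alice encrypts with her secret key and Bob's public key.
sending_box = Box(alice_secret, bob_public)
ciphertext = sending_box.encrypt(b"Meet at 6?")

# The server relays `ciphertext`; without either secret key,
# it sees only random-looking bytes.

# Bob decrypts with his secret key and Alice's public key.
receiving_box = Box(bob_secret, alice_public)
assert receiving_box.decrypt(ciphertext) == b"Meet at 6?"
```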

But when it comes to the proliferation of misinformation, this presents a significant hurdle. In a recent interview with the Economic Times, WhatsApp software engineer Alan Kao explained that the app's underlying encryption makes fake news difficult to tackle: operators have no way of seeing what kind of information is spreading on their networks unless users report it to them directly.

Like other Facebook-owned products, WhatsApp has an acceptable use policy that prohibits using the app, among other things, to publish “falsehoods, misrepresentations, or misleading statements.” But this seems more like a suggestion than a hard and fast rule. The app doesn't offer a user-friendly way to report violating content, apart from its “Report Spam” option. In its FAQ on reporting issues to WhatsApp, the company writes:

We encourage you to report problematic content to us. Please keep in mind that to help ensure the safety, confidentiality and security of your messages, we generally do not have the contents of messages available to us, which limits our ability to verify the report and take action.

When needed, you can take a screenshot of the content and share it, along with any available contact info, with appropriate law enforcement authorities.

While it is easy to see why the company would encourage users to report violating behavior to law enforcement, this might not produce the best outcome in a country like India (among many others). Indeed, several people have been arrested for criticizing politicians on WhatsApp. And in April 2017, an Indian court ruled that a WhatsApp group administrator could even face jail time for “offensive” posts.

No matter what, it seems there is always the risk of the powers-that-be taking undue advantage of their influence over internet activity.
