An internet with borders: A perspective from Pakistan

Image credit: Nick Youngson via Pcipedia.org (CC BY-SA 3.0).

This article was written by Farieha Aziz, co-founder of the digital rights advocacy group Bolo Bhi.

In Pakistan, social media has become an informational battleground for regional and domestic politics. 

We have seen foreign disinformation campaigns on Twitter, and heavy government pressure on platforms to adjust moderation policies to suit its own agenda. This pressure has been backed by repressive new rules targeting speech, and even bans on platforms, with access restored only if they toe the line. There have also been instances where politicians, activists and journalists have found themselves subject to organized campaigns that flagged their social media activities for alleged breaches of the platforms’ rules, which can result in their accounts being suspended or shut down. All of this is in addition to the harassment and pressure that journalists, activists, academics, organisers and participants of women’s marches, and religious minorities in Pakistan have faced for years. This harassment is now being driven online by hashtags, but it is still backed by very real risks of violence. In the push and pull between governments and platforms, users’ interests are often left by the wayside.

Each time companies reach a middle ground with governments to keep their services operational, while cloaking their actions in the interest of users, they deal a blow to local advocacy efforts aiming to create a rights-based discourse that holds the discretion and powers of corporations and governments to account. The popularity of a platform in any region depends on the number of users, and those numbers are also what make a jurisdiction attractive as a market for investment. Users perform this function, and platforms and governments both benefit from it. But when it comes to what may or may not be available for viewing on a platform, which content gets removed, on which grounds, through what process, and with what recourse available to users, these decisions are made either unilaterally by a company or government, or collaboratively between the two. Where decision-making is concerned, users are not considered primary stakeholders by companies or governments.

For years, civil society in Pakistan has battled repressive laws and government actions limiting speech. But now, the way companies frame and enforce their community standards is as much a cause for concern as the laws and mechanisms governments enact to restrict speech. Whether it is user-based complaints under platforms’ community standards or rules, or the process of accepting “legal requests” from governments to remove content or suspend accounts, both have a considerable impact on expression. What these content moderation mechanisms entail, how decisions are made, who decides, and whether they should be making these decisions in the first place all require far more attention. Yet despite the enormous importance of platforms’ decisions, a vast deficit of transparency remains.

At the 2015 RightsCon summit, civil society organizations called for expanded transparency reports by platforms. The statement read:

…Without greater qualification of the data published and clarity on the process companies follow to determine whether a request is legal or is made by a legitimate legal entity, and how the determination to ultimately restrict content or hand over user data is made, the report’s usefulness to users, researchers, journalists, and advocates is limited.

For transparency to be meaningful and useful, reports must contain more than just numbers and categories. In 2020, we’re no closer to this.

As companies enter markets around the world and, thereby, different legal jurisdictions, the applicability of local laws, and compliance with them, places all kinds of constraints on expression. What were once only apprehensions, or perceived threats to a borderless internet, are a reality today. This is the age of the internet with borders.

Government requests are typically categorised separately in transparency reports; however, if the content or accounts identified in those requests happen to violate a company’s rules, they are absorbed into the sections of the reports that cover the company’s own actions to enforce its rules. What this fails to take into account are the resources state regulators have at their disposal, compared to private individuals, to monitor and report content. Unless all government requests are categorised separately, even when they fall under terms-of-service moderation, there will be no way of determining the volume of requests by governments and regulators, which is an indicator of their priorities and a way of holding them to account for the exercise of their powers.

Where does the end user figure in companies’ end game to keep their platforms up and running, and in governments’ attempts to exercise maximum control over them? In Pakistan, it seems the newly revised and approved, but not improved, Removal and Blocking of Unlawful Content (Procedure, Oversight and Safeguards) Rules, 2020, will only exacerbate this dilemma. Opposed by local digital rights groups and by foreign platforms through the Asia Internet Coalition (AIC), the earlier version of the Rules required platforms to take down or restrict content within short time frames; failure to comply could result in fines and a ban on services. Over the summer, the regulator issued notices under Section 37 of the Prevention of Electronic Crimes Act 2016 and blocked several apps, such as Bigo, TikTok, Skout, SayHi, Tagged, Grindr and Tinder, restoring access only when the companies agreed to tailor their content moderation policies to a locally palatable version approved by the Pakistan Telecommunication Authority. News reports suggested that not much has changed in the Rules; a public version was only made available on November 18, 2020.

Caught between government overreach and corporate compliance, users are in double jeopardy. The experience of navigating platforms is becoming increasingly cumbersome for them: they are either driven off platforms completely or forced to remain silent spectators rather than active participants. Except to those engaged in manipulation, disinformation and abuse, platforms are fast losing their appeal, particularly among journalists and human rights defenders. Platforms must remedy this by deploying resources to understand and address the issues users face, instead of facilitating governments more than they already do.


This article was developed as part of a series of papers by the Wikimedia/Yale Law School Initiative on Intermediaries and Information to capture perspectives on the global impacts of online platforms’ content moderation decisions. You can read all of the articles in the series on their blog, or on their Twitter feed @YaleISP_WIII.
