Legislating technology and the internet: Interview with Internet Society's Callum Voge

A photograph of the head and shoulders of Callum Voge, a balding man with a beard, in a white shirt and a red jumper, smiling and looking a little to the right of the camera.

Callum Voge. Used with permission.

Global Voices interviewed Callum Voge, Director of Governmental Affairs and Advocacy at Internet Society, to discuss the challenges of legislating technology. The interview has been edited for clarity and brevity.

Global Voices (GV): What are the hallmarks of good legislation when it comes to the internet? 

Callum Voge (CV): Good internet legislation happens when both policymakers and the public are informed. That's it at its simplest. It's really important that digital policy is developed in consultation with experts.

Policymakers should be asking certain questions. First, is the proposal actually technologically feasible? Then second, are there any consequences of this legislation that might not be obvious — unexpected negative consequences? And if those do exist, how can they be rectified? How can they be addressed?

At the Internet Society, we developed something called the Internet Impact Assessment Toolkit to help with this process, by identifying what the internet needs to exist and what it needs to thrive. Just as it is standard in many countries to do an environmental impact assessment when a big project is launched, this should be done for internet or digital policies too. We're very happy that it has been used, for example, by the EU recently.

There are four main principles that we, with our community, identified as key for the internet: it must be open, global, secure, and trustworthy. Some other features that describe what the internet needs to thrive and reach its potential are: accessible, unrestricted, collaborative, decentralized, common, technology-neutral, confidential. These are key principles.

We are glad to see governments, activists and civil society using this toolkit. And then, through the policymaking process, we hope for positive change. The goal, of course, is to move the internet closer to a resource for everyone that benefits everyone.

GV: Making legislation on the internet is complicated by balancing very important but different — almost contradictory — things, like allowing free expression but limiting disinformation and hate speech. What are your thoughts on walking this tightrope?

CV: It's a really tricky thing, right? Because the issues that we're seeing, as you mentioned, disinformation, harmful content online, horrible things like child sexual abuse, they're there and they need to be addressed in some way. And of course, we're supportive of governments trying to find ways to make the internet safer.

But, at the same time, some of the approaches are, from our view, not proportional and not effective. Two that we've been doing a lot of work on are the UK Online Safety Bill and the EU proposal to counter child sexual abuse online.

Both of these have very positive goals and are at different places in the legislative process. The UK Online Safety Bill was passed in September and has gone for royal assent to become law. The EU process is at an earlier stage; the council and the parliament and the commission will negotiate and try to find a common ground.

But, with both of these proposals, we see new obligations for platforms in the form of detection orders. This means that platforms would be pressured to either weaken encryption, create encryption backdoors, or adopt something called client-side scanning to gain access to private messages. The view behind these orders is that private messages need to be monitored, and that platforms need to moderate this kind of content.

That is a very scary ask and one that is not restricted to countries with nondemocratic governments.

The idea with “encryption backdoors” is that a key is created for the government so that they can decrypt messages and data, like a TSA lock that can be opened by the TSA but remains locked otherwise — as if the government has never been known to poke its nose into people's business. Even if you really trust your government would not abuse those powers, there are other issues, too, because what we always try to hammer home is that when you create an encryption backdoor, that's a systemic weakness in the encryption. So it means that not only the government but also criminals can exploit this weakness. And the government should also worry about so-called hostile state actors — other governments also exploiting those weaknesses. What we always try to emphasize is that there's no such thing as a backdoor that only works for the good guys and not the bad guys. A backdoor is a backdoor.
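The "one key opens everything" problem can be sketched in a few lines. The toy scheme below is not a real cipher, and the key names are invented for illustration; it only shows the structural point: a key-escrow backdoor is a single systemic weakness, because whoever holds the escrow key, government or criminal, can decrypt every user's messages.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Toy keystream derived from a key. NOT a real cipher."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(message: bytes, key: bytes) -> bytes:
    return bytes(m ^ k for m, k in zip(message, keystream(key, len(message))))

decrypt = encrypt  # XOR with the same keystream reverses itself

# A "backdoored" scheme produces a second copy of every message,
# encrypted under one escrow key held by the authority.
ESCROW_KEY = b"hypothetical-government-escrow-key"

def backdoored_encrypt(message: bytes, recipient_key: bytes):
    return encrypt(message, recipient_key), encrypt(message, ESCROW_KEY)

user_copy, escrow_copy = backdoored_encrypt(b"meet at noon", b"alice-bob-shared-secret")

# Anyone who obtains the escrow key (an agency, a leaker, a criminal)
# can read everyone's traffic, not just one target's:
assert decrypt(escrow_copy, ESCROW_KEY) == b"meet at noon"
```

The point of the sketch is that the escrow key is not bound to any one investigation or user: a single leak compromises every message ever sent under the scheme.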

The UK government claims that there's a way to do this safely without violating end-to-end encryption. The technologists disagree. There's no known technology to do that. If you break encryption, you break it. It's as simple as that.

GV: Are there any alternatives to encryption backdoors?

CV: An alternative to the encryption backdoor is client-side scanning. Basically, it's a system where something is embedded on a user's device, and it scans the content of your text, images, files, and so on, and compares it to a database of objectionable content before messages are even sent. Law enforcement is notified when there’s a match.
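The mechanism can be sketched in a few lines. This is a simplified illustration assuming exact hash matching, with a made-up blocklist entry; real proposals generally use perceptual hashing so that altered copies still match, but the control flow is the same: the scan runs on the device first, and encryption happens only afterwards.

```python
import hashlib

# Hypothetical blocklist of hashes of known prohibited material,
# pushed to every user's device by the scanning authority.
BLOCKLIST = {hashlib.sha256(b"known-prohibited-image").hexdigest()}

def scan_on_device(attachment: bytes) -> bool:
    """Runs on the sender's own phone, before any encryption."""
    return hashlib.sha256(attachment).hexdigest() in BLOCKLIST

def send_message(attachment: bytes) -> str:
    if scan_on_device(attachment):
        # A match would be reported to law enforcement before sending.
        return "flagged"
    # Only content that passed the scan gets encrypted and transmitted.
    return "encrypted and sent"
```

Because the check inspects the plaintext before encryption begins, the encryption itself is technically untouched; but every message is examined on the user's own device regardless.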

Policymakers are pointing to this because they say it does not break encryption. And they're maybe right on the technicality that the scanning happens on our phones before encryption even starts, but it defeats the whole purpose of encryption. The metaphor we like to give is that if breaking encryption is like steaming open a physical letter as it goes through the sorting office at the post office, client-side scanning is like someone reading your letter over your shoulder as you write it.

So, in the end, the result is the same. Your privacy is dead.

What’s more, this isn’t even an effective way to scan. For one thing, real criminals will circumvent it easily; it's ordinary people who will all be scanned. And the second part is that more data doesn't mean more arrests or more convictions, because more data also needs more resources to be processed.

These threats to encryption and to private messaging, they're being led by established democracies, including the UK and the EU. This is really dangerous because it normalizes and legitimizes this approach, and other governments around the world will pass similar laws, and more repressive governments can use those powers to crack down on dissidents, journalists, and activists.

GV: What are your thoughts on AI and AI regulation? Do you have thoughts on how we should regulate or does it just go back to the main two things you said at the beginning, which is to talk to experts and ask them what will work and what are the implications?

CV: It's really interesting because, as we know, artificial intelligence does present opportunities, but also lots of challenges and lots of risks. And of course, for us, the interest is that if AI is not trustworthy, that will also make the internet untrustworthy. We see them as very much interlinked. As this is such a fast-moving area, unfortunately, the Internet Society does not have the capacity to lead in this policy area, but we recognize it as an important one because, if ethics and values are not considered when AI is being developed, that will decrease trust in the internet.

I think diverse views are needed because one of the big issues a lot of people identify is the potential for bias, and for repeating the biases that exist in society. To really understand these biases, we need to speak, of course, to the technologists, to the policymakers and interest groups, but especially to groups that are disadvantaged within our societies and our communities, to understand whether technology would replicate those biases. It's all about the ethics, and preventing these kinds of mistreatment from being repeated in the digital space.

GV: Have you seen any trends in legislation, where it has been going over the last three years?

CV: Yeah, definitely. There is the risk of internet fragmentation, as different countries pass different laws governing the internet, and it will be increasingly difficult to offer the same services across the board. A really connected trend we're seeing is digital sovereignty, which is a buzzword that many governments have used — there are different variations: digital sovereignty, internet sovereignty, sovereignty in the digital space, it's all synonyms used by different governments.

It means different things to different governments. For some countries, it simply means diversifying their supply chain; they don't want to rely on imported technology. For other countries, it's more about competition — in the European case, they want to make sure that local service providers do exist and that there are legitimate alternatives to the big foreign providers.

But then the final definition, which is, of course, more common with more authoritarian governments, is state sovereignty in the digital realm — the ability to control the flow of information on the internet. Whatever the reasons for it, it is ultimately the state deciding what is legitimate and what is not, sometimes through the lens of what is a threat to their legitimacy as the government.

So when we talk about our digital sovereignty policies as naturally good or bad, it's impossible to say because there's such a wide range of policies that come out of it. What we can say, though, is that this trend is concerning. It takes the existing global internet that we know and rely on and applies geographic boundaries to it. As this view becomes normalized by policymakers, we can expect to see new policies that would turn our global internet into a series of fragmented intranets that do not fully connect with each other.
