A Technological Solution to the Challenges of Online Defamation

[Image: Screenshot from Dr. Les Sachs. (CC BY-2.0)]

Written by Eduardo Bertoni, Director of CELE, University of Palermo.

When people are insulted or humiliated on the Internet and decide to take legal action, their cases often follow a similar trajectory. Consider this scenario:

A public figure, let’s call her Senator X, enters her name into a search engine. The results surprise her — some of them make her angry because they come from Internet sites that she finds offensive. She believes that her reputation has been damaged by certain content within the search results and, consequently, that someone should pay for the personal damages inflicted.

Her lawyer recommends going after the search engine – the lawyer believes that the search engine should be held liable for the personal injury caused by the offensive content, even though it did not create that content. The Senator is somewhat doubtful about this approach, since the search engine is also a useful tool for her own self-promotion; after all, not all of the sites that appear in the search results are bothersome or offensive. Her lawyer explains that although the authors of the offensive content will likely be difficult to find, they should also be held liable. Another option is to ask the search engine to block the offensive sites from searches for her name. Yet the lawyer knows that this cannot be done without an official petition, which will require a judge’s intervention.

“We must go against everyone – authors, search engines – everyone!” the Senator will likely say. “Come on,” the lawyer will reply, “let’s move forward.” However, it does not occur to either of them that there may be an alternative to classic courtroom litigation. The proposal I make here suggests a change to the standard approach – one that requires technology to play an active role in the solution.

Who is liable?

The “going against everyone” approach poses a critical question: Who is legally liable for content that is available online? Authors of offensive content are typically seen as primarily liable. But should intermediaries such as search engines also be held liable for content created by others?

This last question raises a more specific, procedural one: Which intermediaries will come under scrutiny and be viewed as liable in these situations? To answer it, we must first ask what exactly an ‘intermediary’ is, and then distinguish between intermediaries that simply connect individuals to the Internet (e.g. Internet service providers) and those that host content or offer search functions, since where an intermediary’s responsibility lies depends on the services it provides.

What kind of liability might an intermediary carry?

This brings us to the second step in the legal analysis of these situations: How do we determine which model to use in defining an intermediary’s responsibility? Various models have been debated in the past. Leading concepts include the following (a schematic sketch of their logic follows the list):

  • strict liability, under which the intermediary is answerable for any offensive content it carries, regardless of fault or knowledge
  • subjective liability, under which the intermediary’s responsibility depends on what it has done and what it knew or knows about the content
  • conditional liability – a variation on subjective liability – under which an intermediary that was notified it was hosting or directing users to illegal content, and did nothing in response, becomes liable for that content.
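
To make the differences concrete, here is a minimal sketch, in TypeScript, of how the three models translate into decision logic. The type names, fields, and function are invented for illustration; no statute or court decision prescribes this shape.

```typescript
// Hypothetical sketch: the three liability models as decision logic.
// All names and fields are illustrative, not drawn from any law.

type LiabilityModel = "strict" | "subjective" | "conditional";

interface CaseFacts {
  carriesContent: boolean;   // the intermediary hosts or indexes the content
  knewItWasIllegal: boolean; // actual awareness that the content is illegal
  wasNotified: boolean;      // received a formal notice about the content
  actedAfterNotice: boolean; // removed or delisted it after the notice
}

function isIntermediaryLiable(model: LiabilityModel, facts: CaseFacts): boolean {
  if (!facts.carriesContent) return false; // no involvement, no liability
  switch (model) {
    case "strict":
      // Answerable for any offensive content it carries, regardless of fault.
      return true;
    case "subjective":
      // Liability turns on what the intermediary did and what it knew.
      return facts.knewItWasIllegal;
    case "conditional":
      // Liable only if it was put on notice and then failed to act.
      return facts.wasNotified && !facts.actedAfterNotice;
    default:
      throw new Error(`Unknown model: ${model}`);
  }
}

// An intermediary that ignored a notice is liable under the conditional
// model even if it never independently knew the content was illegal.
console.log(isIntermediaryLiable("conditional", {
  carriesContent: true,
  knewItWasIllegal: false,
  wasNotified: true,
  actedAfterNotice: false,
})); // prints: true
```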

These three options for determining liability have been written into legislation and applied in judicial decisions around the world. But none of them provides a perfect standard. As a result, experts continue to search for a definition of liability that satisfies those who have a legitimate interest in preventing the damages that offensive online content can cause.

How are victims compensated?

Now let’s return to the example presented earlier. Consider the concept of Senator X’s “satisfaction.” In these types of situations, “satisfaction” is typically economic — the victim will sue for a certain amount of money in “damages”, and she can target anyone involved, including the intermediary.

Interestingly, in the offline world, alternatives have been found for victims of defamation: For example, the “right to reply” aims to aid anyone who feels that his or her reputation or honor has been damaged and allows individuals to explain their point of view.

We must also ask whether the right to reply conflicts with freedom of expression. It is critical to recognize that freedom of expression is a human right recognized by international treaties; technology should be able to deliver a similar remedy for online defamation without putting freedom of expression at risk.

Solving the problem with technology

In an increasingly online world, we have tried, without success, to apply traditional judicial solutions to the problems faced by victims like Senator X. These attempts persist because lawyers are accustomed to using traditional standards in other situations. But why not change the approach and use technology to provide the “satisfaction” victims seek?

The idea of including technology as part of the solution, when it is also part of the problem, is not new. If we combine the possibilities that technology offers us today with the older idea of the right to reply, we could change the broader focus of the discussion.

My proposal is simple: some intermediaries (like search engines) should create a tool that allows anyone who feels that he or she is the victim of defamation and offensive online content to denounce and criticize the material on the sites where it appears. I believe that for victims, the ability to say something and to have their voices heard on the sites where others will come across the information in question will be much more satisfactory than a trial against the intermediaries, where the outcome is unknown.
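
To illustrate, here is a minimal sketch of what such a reply mechanism might look like. The Reply shape, the endpoint URL, and the attachReply function are all hypothetical, invented for this example; they do not describe any existing service.

```typescript
// Hypothetical sketch of a "right to reply" submission. None of these
// names or endpoints exist; they only make the proposal concrete.

interface Reply {
  targetUrl: string;   // the page carrying the allegedly defamatory content
  author: string;      // the person exercising the right to reply
  statement: string;   // their side of the story, shown alongside the page
  submittedAt: string; // ISO 8601 timestamp
}

// Submit a reply so the intermediary can display it next to the flagged
// result, instead of delisting or filtering the page itself.
async function attachReply(reply: Reply): Promise<void> {
  const response = await fetch("https://intermediary.example/v1/replies", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(reply),
  });
  if (!response.ok) {
    throw new Error(`Reply was not accepted: ${response.status}`);
  }
}

// Example: Senator X answers the offensive page in her own words.
attachReply({
  targetUrl: "https://example.com/offensive-article",
  author: "Senator X",
  statement: "These claims are false; here is my account of the facts.",
  submittedAt: new Date().toISOString(),
}).catch(console.error);
```

The design point is that the victim’s statement travels alongside the offensive result rather than the result being delisted or filtered.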

This proposal would also help to limit regulations that impose liability on intermediaries such as search engines. This is important because many of the regulations that have been proposed are technologically impractical. Even when they can be implemented, they often result in censorship; requirements that force intermediaries to filter content regularly infringe on rights such as freedom of expression or access to information.

This proposal may not be easy to implement from a technical standpoint. But I hope it will encourage discussion about the issue, given that a tool like the one I have proposed, although with different characteristics, was once part of Google’s search engine (the tool, Google Sidewiki, is now discontinued). It should be possible to improve upon this tool, adapt it, or build something completely new on the technology behind it in order to help victims of defamation clarify their opinions and speak their minds about these issues, instead of relying on courts to impose censorship requirements on search engines. Such a tool could provide much greater satisfaction for victims and could also help prevent the violation of the rights of others online.

Critics may argue that people will not read the disclaimers or statements written by “defamed” individuals and that the impact and spread of the offensive content will continue unfettered. But this is a cultural problem that will not be fixed by placing liability on intermediaries. As I explained before, the consequences of doing so can be unpredictable.

If we continue to rely on traditional regulatory means to solve these problems, we’ll continue to struggle with the undesirable results they can produce, chiefly increased controls on information and expression online. We should instead look to a technological solution as a viable alternative that cannot and should not be ignored.

Eduardo Bertoni is the Director of the Center for Studies on Freedom of Expression and Access to Information at Palermo University School of Law in Buenos Aires. He served as the Special Rapporteur for Freedom of Expression to the Organization of American States from 2002 to 2005.

Comments

  • Nick

    One unintended consequence of this idea – that is, providing the ability to “rebut” nasty content online – will be to give greater prominence to that unwanted content. Search engines will interpret content that is accompanied by a rebuttal as more interesting and more relevant, and push that content up the rankings. Especially for those who are not public figures, who don’t have pages and pages of content about them online, this could make the problem worse.

    For now, the technological solution appears to be SEO and the creation of countervailing good content – and petitioning Google to change its search algorithm, which it actually seems to have done, I understand, with regard to the “mug shot” plague in the U.S.

    Sooner or later, I fear, such vast numbers of people will have nasty statements about them on the internet that it will be commonplace. On the plus side, it will become correspondingly less important.
