Deepfakes and the risks from the growing use of video fraud


About six years ago, I wrote about a new “virus” for journalism. Several years on, there is still no “anti-virus” against deepfakes. Their production has been advanced and perfected, and the last remaining step is simplifying the process of making these hoax videos. Until now, producing a deepfake required certain skills with specialized programs. The development of new, easier-to-use applications based on artificial intelligence is leading to the mass production of deepfakes. In the last presidential election in the United States, there were more “cheapfakes” than “deepfakes,” mainly because cheapfakes were easy to produce and anyone could make them. Now that the same is becoming true of deepfakes, it takes little persuasion or calculation to conclude that, in important social processes such as elections, deepfakes will be a principal risk in the world of information, since manipulation is becoming ever more feasible in the technological sense.

Deepfakes are fake visual content, generated with artificial intelligence, in which a real person appears to say or do something they never did. Over the years of their existence, some deepfakes have been produced for entertainment and others for awareness-raising, but there are also cases where deepfakes are used to manipulate society for political or economic motives.

Deepfakes are becoming ever easier to produce, which makes raising public awareness about them urgent. A recent example from the Balkans is an erotic video impersonating the well-known Albanian TV presenter Luana Vjollca, whose face was placed on another person's body with the help of artificial intelligence. The Albanian journalist Blendi Fevziu published a deepfake video of himself for awareness purposes, in which he appears to speak Chinese, Arabic, Russian, and Portuguese. Other deepfake videos circulated in the region during 2022, such as those in which Elon Musk speaks Albanian or Serbian.

These examples show that producing a deepfake is much easier now than it was a few years ago. Looking beyond the region, one viral deepfake featured the famous American model Bella Hadid appearing to voice support for Israel's actions in the war against Hamas, a statement she never made. Hadid's deepfake had over 28 million views on X (formerly Twitter). The development of easy-to-use programs based on artificial intelligence is creating yet another threat to the truth.

Now the truth is on the edge of “death”: as more deepfakes circulate, it will become ever harder to know what is true. Given the existing crisis of media credibility, journalism will be challenged even more in the near future. All of this is best fought with professional journalism, journalism that adheres to professional standards and puts the public interest first. Otherwise, sensationalist, clickbait journalism will only fuel further mistrust of the media and leave citizens feeling insecure about the information they receive, creating space for other actors in the information environment. In such a situation, the winners are always those who like to operate in the fog and profit through a Machiavellian approach, without a thought for the damage caused to society.

The popularization of deepfakes, driven by the development of software based on artificial intelligence, opens a new chapter in the circulation of information and communication, for both the sender and the receiver. Given this rapid and consequential development, efforts should be made to make verification a simpler and faster process than it is now.

There are currently some programs, such as FakeCatcher and Sentinel, that help detect this phenomenon, which can make you unknowingly inhabit a fake reality; however, there are still no open tools available for everyone to use free of charge. It is also important that detection methods advance in step with deepfakes, because the technology facilitating their production is developing at a rapid pace. Currently, these platforms do not detect audio manipulation, an important element of audiovisual manipulation, as there are already cases where the image is real while the sound is generated by artificial intelligence.

Knowledge of deepfakes and awareness of this phenomenon helps society to be more critical of any video it is exposed to. Tools for detecting deepfakes are important, but countering disinformation also requires a great commitment from individuals and civil society organizations to make society aware of the existence and implications of deepfakes in everyday life.

On the other hand, a new legal framework for deepfakes is not necessary, because existing laws already regulate content intended to deceive, defame, or harm individuals, businesses, or society as a whole. However, given the speed of their distribution, their virality, and their potential for harm, it is important that deepfakes be treated as a priority.
