What are deepfakes and how to detect them?

Deepfakes: What are they?  

The term “deepfakes” comes from combining “deep learning” and “fake”, alluding to fake content generated with deep learning (a form of AI).

This refers to a spoofing technique that uses advanced artificial intelligence to collect data on physical movements, facial features and even the voice, and processes it through an AI encoder or a Generative Adversarial Network (GAN) to create fake but hyperrealistic audiovisual, graphic or voice content.
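To make the adversarial idea concrete, the toy PyTorch sketch below pits a generator against a discriminator on flattened face images. It only illustrates the GAN training loop mentioned above, not a real deepfake pipeline; the image size, network shapes and training data are assumptions.

```python
import torch
import torch.nn as nn

# Toy GAN sketch: a generator learns to produce 64x64 grayscale "faces"
# that a discriminator cannot tell apart from real ones.
IMG_DIM, NOISE_DIM = 64 * 64, 100

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_faces: torch.Tensor) -> None:
    """One adversarial update; real_faces has shape (batch, IMG_DIM)."""
    batch = real_faces.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Discriminator: distinguish real faces from generated ones.
    fake_faces = generator(torch.randn(batch, NOISE_DIM)).detach()
    d_loss = (loss_fn(discriminator(real_faces), real_labels)
              + loss_fn(discriminator(fake_faces), fake_labels))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Generator: fool the discriminator into labelling fakes as real.
    fake_faces = generator(torch.randn(batch, NOISE_DIM))
    g_loss = loss_fn(discriminator(fake_faces), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```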

In other words, it is a video, image or audio clip created with AI that imitates the appearance and/or voice of a person (so-called synthetic media). Among the most popular deepfake videos are those of the user DeepTomCruise on TikTok.

It is not a completely new technology. In fact, it has been used for years in Hollywood film studios, but it is now available to many people through commercial applications, which has increased the volume of such content circulating on the web, even though Facebook banned deepfakes in 2020 (with the exception of those that are clearly parodies).

Why can deepfakes be a threat to digital and information security?  

The problem is not the deepfake itself, which is just content generated with AI, but the way it is used. A recent Europol report warns that most deepfakes in circulation are created with malicious intent. The malpractices that this technology could facilitate include:

  • Violating people’s moral integrity (e.g., by making pornographic video montages).
  • Manipulating images and audio to circumvent biometric authentication.
  • Committing fraud on digital platforms.
  • Spreading fake news and disinformation, which could even disrupt financial markets and destabilise international relations.
  • Stealing identities.
  • Extorting victims (by threatening to distribute false compromising content).

More worryingly, as the technology becomes cheaper, the number of offences committed with it may increase. Europol anticipates this, which is why it recommends understanding deepfakes and being prepared.

How to detect a deepfake?  

It is becoming increasingly difficult. According to a recent study published in Proceedings of the National Academy of Sciences (PNAS), synthetically generated faces are not only photorealistic; they are nearly indistinguishable from real faces and are even judged more trustworthy. Detection is not impossible, however. There are still details to pay attention to:

Blink rate

By paying attention to how often the person in the video blinks, we can tell whether it is a real person or a deepfake: deepfakes tend to blink less often than real people, and sometimes do so in a forced or unnatural way.
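As a rough illustration of this cue, the sketch below estimates a blink rate with OpenCV and MediaPipe Face Mesh by tracking the eye aspect ratio (EAR) frame by frame. The landmark indices and the 0.2 threshold are common approximations rather than calibrated values, so treat it as a starting point, not a reliable detector.

```python
import cv2
import mediapipe as mp
import numpy as np

# Commonly used MediaPipe Face Mesh indices for the left eye:
# outer corner, two upper-lid points, inner corner, two lower-lid points.
LEFT_EYE = [33, 160, 158, 133, 153, 144]
EAR_THRESHOLD = 0.2  # assumed value below which the eye counts as closed

def eye_aspect_ratio(pts: np.ndarray) -> float:
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it drops sharply during a blink."""
    vertical = np.linalg.norm(pts[1] - pts[5]) + np.linalg.norm(pts[2] - pts[4])
    horizontal = np.linalg.norm(pts[0] - pts[3])
    return vertical / (2.0 * horizontal)

def blinks_per_minute(video_path: str) -> float:
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    blinks, eye_closed, frames = 0, False, 0
    with mp.solutions.face_mesh.FaceMesh(max_num_faces=1) as mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frames += 1
            result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not result.multi_face_landmarks:
                continue
            lm = result.multi_face_landmarks[0].landmark
            pts = np.array([(lm[i].x, lm[i].y) for i in LEFT_EYE])
            closed = eye_aspect_ratio(pts) < EAR_THRESHOLD
            if closed and not eye_closed:  # count the closing edge as one blink
                blinks += 1
            eye_closed = closed
    cap.release()
    minutes = frames / fps / 60.0
    return blinks / minutes if minutes else 0.0

# Real speakers typically blink around 15-20 times per minute; a far lower
# rate, or perfectly regular blinks, is a reason to look closer.
```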

Face and body   

Generating a forgery of a person’s entire body involves a great deal of work, so most deepfakes are limited to face swaps. One way to detect a forgery is therefore to look for incongruities between the proportions of the body and the face, or between facial expressions and body movements or postures.

Video length   

A quality fake requires several hours of work and training of the algorithm, so fake videos are usually only a few seconds long.   

Video sound  

Software to create voice fakes exists, but many deepfake tools only alter the face. Be suspicious if the video has no audio, or if the audio does not match the image, especially the lip movements; a quick first check of the file itself is sketched below.
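The following sketch uses ffprobe (part of FFmpeg) to report whether a clip carries an audio stream and how long it runs, two of the quick sanity checks described in this and the previous cue. It assumes ffprobe is installed and on the PATH, and the file name is hypothetical.

```python
import json
import subprocess

def probe_video(path: str) -> dict:
    """Return the clip's duration and whether it carries an audio stream."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries",
         "format=duration:stream=codec_type", "-of", "json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    info = json.loads(out)
    has_audio = any(s.get("codec_type") == "audio" for s in info.get("streams", []))
    duration = float(info["format"]["duration"])
    return {"duration_seconds": duration, "has_audio": has_audio}

if __name__ == "__main__":
    report = probe_video("suspect_clip.mp4")  # hypothetical file name
    if not report["has_audio"] or report["duration_seconds"] < 15:
        print("Short and/or silent clip - worth a closer look:", report)
    else:
        print("Basic checks passed:", report)
```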

Inside the mouth  

The technology used to generate deepfakes is still not very good at faithfully reproducing the tongue, teeth and oral cavity when the person speaks. Blurring inside the mouth is therefore a sign of a fake image.
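One rough way to quantify this blur is to compare the sharpness of the mouth region with that of the whole face, for instance with the variance of the Laplacian in OpenCV. The bounding boxes below are placeholders; in practice the mouth box would come from a face-landmark detector such as the one in the blink example.

```python
import cv2

def sharpness(gray_region) -> float:
    """Variance of the Laplacian: low values indicate a blurry region."""
    return cv2.Laplacian(gray_region, cv2.CV_64F).var()

def mouth_blur_ratio(frame, face_box, mouth_box) -> float:
    """Ratio well below 1 means the mouth is noticeably blurrier than the face."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    fx, fy, fw, fh = face_box    # (x, y, width, height) from a face detector
    mx, my, mw, mh = mouth_box   # placeholder mouth box, e.g. from landmarks
    face_sharp = sharpness(gray[fy:fy + fh, fx:fx + fw])
    mouth_sharp = sharpness(gray[my:my + mh, mx:mx + mw])
    return mouth_sharp / face_sharp if face_sharp else 0.0

# Example usage (hypothetical frame and boxes):
# frame = cv2.imread("suspect_frame.png")
# if mouth_blur_ratio(frame, (80, 60, 200, 200), (140, 190, 80, 40)) < 0.5:
#     print("Mouth region unusually blurry - possible deepfake artefact")
```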

Other details   

Details are the weak point of deepfake software, so we can spot fakes by focusing on small aspects such as dull shadows around the eyes, unrealistic facial hair, overly smooth or overly wrinkled skin, fictitious moles and unnatural lip colour.

Using technology   

In processes that require more thorough verification, deepfake detection software or online liveness detection systems (e.g., taking a selfie or joining a video call in real time) can be used, which greatly reduces the risk. A simplified view of what frame-level detection software does is sketched below.
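As an illustration of what such detection software does under the hood, the sketch below scores sampled video frames with a binary image classifier. The checkpoint deepfake_detector.pt and its training are hypothetical; real services combine far more signals (temporal consistency, audio analysis, metadata) than a single frame-level CNN.

```python
import cv2
import torch
import torch.nn as nn
from torchvision import models, transforms

# Hypothetical frame-level detector: a ResNet-18 with a 2-class head
# (0 = real, 1 = fake), assumed to have been fine-tuned elsewhere.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)
model.load_state_dict(torch.load("deepfake_detector.pt", map_location="cpu"))
model.eval()

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def fake_probability(video_path: str, every_n: int = 10) -> float:
    """Average 'fake' probability over every n-th frame of the clip."""
    cap, scores, idx = cv2.VideoCapture(video_path), [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            with torch.no_grad():
                logits = model(preprocess(rgb).unsqueeze(0))
            scores.append(torch.softmax(logits, dim=1)[0, 1].item())
        idx += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0
```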

In the end, as with all threats arising from the digital world, people’s judgement is the link that most needs strengthening: we need to be wary of suspicious content and learn in detail how to detect deepfakes.

Source: Telefónica
