Beware of Deepfakes: Protect your identity and avoid AI-generated phishing scams with our tips.
Deepfakes are one of the greatest threats companies will face in the coming years. Why? The arrival of Artificial Intelligence has brought great advantages for businesses, but it has also multiplied the threats, and identifying them is the key to preventing them. One of the most pressing is the so-called "replicant" or "digital twin", which deserves more and more of our attention.
What are deepfakes?
Deepfake is a word we have begun to hear often in the business world. It refers to a form of identity theft carried out with AI: videos manipulated so realistically that a person's face and voice can be swapped in order to commit a scam or some other unlawful activity.

Such videos are a problem because cybercriminals can make a person appear to "say and do whatever they want". In recent times this type of impersonation has become one of the most widely used, thanks to its ability to generate fake advertising campaigns and fake news of every kind.
Rising quality: a growing problem
Quality used to be the weak point of these videos, as achieving realism was very difficult. Today that is no longer a problem for criminals, but for companies: the rise and improvement of AI has made it increasingly easy to produce almost imperceptible impersonations.
But is this improvement really a problem? In 2022 the FBI warned companies after seeing a significant increase in the use of this technique in remote job interviews. We have reached that point, so competent, efficient cybersecurity is more important than ever.
Main threats from deepfakes
These are the main ways this technique is used to defraud companies.
Impersonation of senior officials
Impersonating senior company officials is one of the biggest threats of this type of scam. It lets criminals trigger all kinds of actions inside the company without passing through any filter.
This is often used to give employees orders that put the company at risk and benefit the cybercriminal: for example, diverting funds or derailing a good deal. It is important that employees are aware of these schemes, so that they hesitate when an order does not seem quite right to them.
Pornographic deepfakes for extortion
One of the most common practices for extorting employees is the pornographic deepfake: a fabricated video can make it appear that an employee was recorded in a sexual situation. What the criminals are trying to obtain, in most cases, is insider company information. The biggest problem is that if the extortion drags on for too long, it can cause serious trouble for the company.
Image appropriation
Today, one of the most valuable things a company has is its brand value. Having shareholders or customers change the image they hold of a company can cause irreparable damage. These techniques are used to make well-known faces appear to say things that are inappropriate or incorrect.
How to detect "replicants"
The big question for companies is: is there a way to detect these videos? There is no foolproof method or exact science, but there are certain signs we can look for to spot an impersonation:
- Blinking: replicating a natural human blink is very complex. To identify a "replicant" or digital twin, watch for blinking that is not normal but mechanical: the subject blinks at fixed intervals, every few seconds, in a very unnatural way.
- Lip movement: the way the lips move can give these AI videos away. How? Watch for a mouth that never opens very wide, or that never fully closes. It is the small details that make the difference.
- Audio and mouth coordination: although the quality of these videos is very high, there are sometimes clues that expose the fake. Pay attention to whether the voice matches the movement of the mouth; if the audio is out of sync with the lips, it is probably a scam.
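The blink sign above can even be sketched programmatically. The following is a minimal illustration, not a production detector: it assumes blink timestamps have already been extracted by some face-landmark tool (a hypothetical input), and simply measures how metronomic the rhythm is, since natural blinking is irregular while synthetic blinking tends to repeat at fixed intervals.

```python
import statistics

def blink_regularity(blink_times: list[float]) -> float:
    """Coefficient of variation of the intervals between blinks.

    Natural blinking varies a lot (high value); a deepfake that blinks
    "every few seconds" produces near-identical intervals (value near 0).
    """
    intervals = [b - a for a, b in zip(blink_times, blink_times[1:])]
    mean = statistics.mean(intervals)
    return statistics.stdev(intervals) / mean

def looks_mechanical(blink_times: list[float], threshold: float = 0.15) -> bool:
    # Flag clips whose blink rhythm is suspiciously uniform.
    return blink_regularity(blink_times) < threshold

# Hypothetical timestamps (in seconds) from a face-landmark tool:
natural = [0.9, 3.1, 4.0, 7.8, 9.2, 13.5]     # irregular rhythm
synthetic = [2.0, 4.0, 6.0, 8.0, 10.0, 12.0]  # metronomic rhythm
```

The threshold here is an illustrative assumption; a real system would calibrate it against labelled footage.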
Tips to prevent this practice
As we have said, there is no sure way to avoid these practices. Even so, just as simple measures can protect against call spoofing, a number of tips can be followed to deal with the threat posed by these videos.
- Training staff in these types of techniques is key to preventing them.
- Good internal communication encourages employees to speak up about strange orders or a video that doesn't make sense to them.
- Implementing AI tools to detect and block these types of videos is essential. The same intelligence can also be used to find security breaches.
- Implementing a zero-trust model is another great solution: employees are granted access only to what they need for the task assigned at any given time, which prevents criminals from gaining easy access.
- Request that the video caller turn sideways to the camera.
- Ask personal questions that an impersonator would have difficulty guessing.
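The re-authentication idea behind these tips can be sketched as a simple challenge–response over a second channel. This is a minimal illustration under assumed names (the `KNOWN_SECRETS` directory and the example identity are hypothetical): the point is that a deepfaked caller cannot answer a challenge that requires a secret shared outside the video call.

```python
import hashlib
import hmac
import secrets

# Hypothetical directory of secrets pre-shared out of band
# (e.g. handed over in person or via a trusted channel).
KNOWN_SECRETS = {"cfo@example.com": b"correct horse battery staple"}

def issue_challenge() -> str:
    """Generate a one-time nonce to send over a second channel."""
    return secrets.token_hex(16)

def expected_response(identity: str, challenge: str) -> str:
    # The genuine person computes the same HMAC with the shared secret.
    key = KNOWN_SECRETS[identity]
    return hmac.new(key, challenge.encode(), hashlib.sha256).hexdigest()

def verify(identity: str, challenge: str, response: str) -> bool:
    """Approve a sensitive order only if the caller proves the secret."""
    return hmac.compare_digest(expected_response(identity, challenge), response)
```

A convincing face and voice on a video call prove nothing here; only knowledge of the pre-shared secret does, which is exactly what an impersonator lacks.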
Prevention and various re-authentication measures are the main foundations for protecting companies against deepfakes. Updating cybersecurity systems and training all company employees are basic measures that must be addressed in the short term. At Pasiona, we have the training and experience needed to strengthen companies against this new threat.
deepfakes, AI, identity theft, privacy, security