Cybercriminals are increasingly using artificial intelligence (AI) to build online scams, producing fake content that features public figures in order to lend credibility to their fraud.
These criminals use deepfake technology to manipulate images and voices, generating videos and audio clips in which public figures and celebrities appear to encourage investments that are in fact AI-generated scams.
Martina López, IT Security Specialist at ESET Latin America, explains: “We often see videos or audio clips, which adds fidelity because we are hearing the impersonated person’s ‘own voice’ or seeing their ‘own image’. This content can also be combined to reinforce the narrative of the scam or fake news and confuse victims.”
The specialist notes that the platforms criminals use most to reach potential victims are social networks such as Facebook, Instagram, TikTok and X (formerly Twitter), because of their ability to reach millions of people within minutes. She adds that messaging applications such as WhatsApp and Telegram are also used to share fraudulent links or messages that appear personal.
ESET, a leading company in proactive threat detection, recommends watching for warning signs and shares the telltale features of fake content that users of these platforms should be alert to:
Strange facial movements or unnatural gestures, and shadows that do not match the light source of the video or image.
A mismatch between the person’s lip movements and the audio in videos.
Poor quality images or sounds, with artifacts, distortions or lack of sharpness.
Unreliable sources, such as new, unverified profiles or accounts with few followers.
Content that seeks an immediate emotional reaction, such as concern, anger or happiness.
If any of the above is detected, López recommends verifying the source of the information as well as its likely veracity: if it seems too strange to be real, perhaps it is not.
To avoid falling victim to scammers, ESET suggests that users consult the public figure’s official profiles directly, or reliable pages related to the subject. Tools such as TinEye or Google Reverse Image Search also make it possible to run a reverse search on a screenshot from a video and check whether the same content has already been reported in another context.
The information should also be cross-checked by searching reputable media outlets to see whether the content has been confirmed. Most important of all, analyze the context and ask yourself whether the behavior shown in the video or audio is consistent with what is expected of that public figure.
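The reverse-search step above typically rests on perceptual hashing: visually similar images produce similar hashes, so a screenshot can be matched against previously reported content even after recompression or resizing. Below is a minimal, stdlib-only Python sketch of an average hash over toy 8x8 grayscale pixel grids; real services such as TinEye decode actual image files and use far more robust matching, and the pixel grids here are invented purely for illustration.

```python
def average_hash(pixels):
    """Build a 64-bit hash: bit is 1 where a pixel exceeds the mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests near-identical images."""
    return bin(h1 ^ h2).count("1")

# Hypothetical 8x8 grayscale frames: a "screenshot" plus a slightly
# brightened copy (as recompression might produce), and an unrelated image.
frame = [[(x * 31 + y * 7) % 256 for x in range(8)] for y in range(8)]
noisy = [[min(255, p + 3) for p in row] for row in frame]  # near-duplicate
other = [[255 - p for p in row] for row in frame]          # unrelated

print(hamming_distance(average_hash(frame), average_hash(noisy)))  # small
print(hamming_distance(average_hash(frame), average_hash(other)))  # large
```

Because the hash compares each pixel to the image’s own mean, a uniform brightness shift leaves the bits unchanged, which is exactly what makes this kind of matching resilient to minor edits.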
What can we do when we detect a scam attempt?
When a user detects, or suspects having been the target of, an attempted deepfake scam, the best response is to report it on the platform where the content was received, using each platform’s “report” function, typically under a justification such as “false information” or “impersonation”.
It is important for the person reporting to provide a clear description of the problem and, if possible, attach evidence. In serious cases, local cybersecurity agencies can also be contacted, or a report filed with the competent authorities so that an investigation can be opened.
Educating society to anticipate fraud
As AI platforms advance, fraud will become harder to detect. Improvements in deepfake quality and the development of tools that generate personalized content could enable more targeted and effective attacks, which is why ESET emphasizes digital education.
“Training programs in schools, businesses and communities can teach people how to identify fake content and protect themselves online. In addition, awareness campaigns can help reduce collective vulnerability by informing about the most common risks and techniques used by cybercriminals”, says the ESET Latin America IT security specialist.