Deepfakes: the fraud that keeps evolving (and is just getting started)
December 10, 2025
4 minute read
Deepfake fraud has gone from being a technological experiment to one of the most serious risks facing businesses and users today. Between 2024 and 2025, the accessibility of artificial intelligence (AI) made it possible for anyone to clone voices, create fake faces, or manipulate video calls in real time. This opened a Pandora's box of attacks far more convincing than traditional email or telephone fraud. Criminals no longer imitate just a message or a fragment of a person: they imitate the whole person. This evolution explains the rapid increase in digital fraud incidents reported globally.
The types of fraud that deepfakes have revolutionized are varied, and new forms appear every year. The best known is "CEO" fraud, now enhanced with AI, in which the victim receives a call or video call from the supposed director of the company requesting urgent payments. But there are others that are just as sophisticated: family scams using cloned voices of children or parents, impersonation in banking or "Know Your Customer" (KYC) processes, identity fraud in job interviews, internal fraud against employees, and even romance scams built on hyper-realistic avatars. All these attacks share the same idea: using AI to generate immediate trust and manipulate the victim.
What is most worrying is that the technology behind every attack is very simple. For voice, models such as RVC v2, OpenVoice, or ElevenLabs can clone a voice from just 10 to 20 seconds of audio. In addition, real-time voice conversion now has such low latency that criminals can hold fluid conversations without being detected. For video, platforms such as HeyGen, Synthesia, and D-ID allow the creation of corporate avatars, while open-source tools such as DeepFaceLive enable live impersonation during video calls. In short, attackers do not need advanced technical knowledge to carry out this fraud.
A modern deepfake scam follows a clear pattern. First, the attacker collects public data about the person they want to impersonate: YouTube videos, social media images, or audio from any recording. Then they clone the voice or face using training processes that now take only minutes. The next step is to generate a dynamic script with language models capable of adapting to the victim's responses. Finally, they carry out the impersonation by phone or on platforms such as Zoom, Teams, or WhatsApp, among many others. At some point, the attacker pressures the victim to make transfers or grant authorizations. They even plan their exit, erasing digital traces to complicate the investigation.
Today, there are also multi-agent networks that enable criminals to carry out fraud at scale. One agent generates the deepfake voice, another analyzes the conversation and decides what to say, another modulates emotion to simulate urgency or fear, and another coordinates the sending of malicious links or payment instructions. These systems can handle dozens or hundreds of simultaneous calls, adjusting to each victim in real time. This multiplies the attacker's reach and renders many traditional detection techniques obsolete. We are facing a new type of attack: victim profiling powered by deepfakes.
Given this scenario, mitigation requires a combination of technical controls and human processes. Companies must implement clear policies: never authorize payments by voice or video call alone, use secret verification codes, apply two-factor authentication through independent channels, and strengthen anti-spoofing controls in internal processes. On a personal level, it is essential to be suspicious of any urgent request, look for inconsistencies in voice or video, and validate directly with the person involved through a known channel. Advanced tools can help, but the first line of defense remains human verification.
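To make the "independent channel" idea concrete, here is a minimal sketch in Python. It assumes a hypothetical payment-approval step in which a one-time code is delivered outside the call (for example, through an internal messaging tool or a callback to a number already on file) and the transfer is only processed if the requester can repeat it. The function names and workflow are illustrative, not any specific product's API.

```python
import hmac
import secrets

# Illustrative sketch: a payment requested by voice or video call is held
# until a one-time code, sent over an independent channel, is confirmed.

def issue_verification_code() -> str:
    """Generate a short one-time code to deliver over the separate channel."""
    return f"{secrets.randbelow(10**6):06d}"

def confirm_payment(request_id: str, code_sent: str, code_received: str) -> bool:
    """Approve the request only if the code repeated by the requester matches
    the one delivered out of band. hmac.compare_digest avoids timing leaks."""
    if not hmac.compare_digest(code_sent, code_received):
        print(f"[{request_id}] verification failed: do NOT process the payment")
        return False
    print(f"[{request_id}] out-of-band verification passed")
    return True

# Example: an 'urgent' transfer requested on a video call stays blocked
# because the caller cannot repeat the code sent through the other channel.
code = issue_verification_code()
confirm_payment("REQ-2025-001", code_sent=code, code_received="000000")
```

The key design point is that the code never travels over the same channel as the request, so even a perfect voice or video clone cannot complete the approval on its own.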
Platforms such as Batuta provide an essential layer of management and prioritization. Batuta consolidates inventory, vulnerabilities, configurations, identities, and endpoint telemetry, allowing organizations to understand where their true weaknesses lie and which problems need to be solved first. Its approach is to reduce the opportunities for an attacker to succeed: keeping systems up to date, closing insecure configurations, identifying identities with excessive privileges, automating remediation tasks, and standardizing security processes.
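As a rough illustration of that prioritization idea (not Batuta's actual data model or scoring; the asset fields and weights below are assumptions made for the example), a consolidated inventory can be ranked so that the riskiest assets are remediated first:

```python
from dataclasses import dataclass

# Toy risk-prioritization model. Field names and weights are assumptions
# for illustration only, not any vendor's scoring method.

@dataclass
class Asset:
    name: str
    critical_vulns: int         # unpatched critical vulnerabilities
    insecure_configs: int       # weak or non-compliant configurations
    privileged_identities: int  # accounts with excessive privileges
    internet_exposed: bool

def risk_score(a: Asset) -> float:
    """Combine exposure signals into one score used to order remediation work."""
    score = 5.0 * a.critical_vulns + 2.0 * a.insecure_configs + 3.0 * a.privileged_identities
    return score * (1.5 if a.internet_exposed else 1.0)

inventory = [
    Asset("payments-api", critical_vulns=2, insecure_configs=1, privileged_identities=3, internet_exposed=True),
    Asset("hr-portal", critical_vulns=0, insecure_configs=4, privileged_identities=1, internet_exposed=False),
]

# Highest-risk assets first: these are the remediation tasks to tackle now.
for asset in sorted(inventory, key=risk_score, reverse=True):
    print(f"{asset.name}: {risk_score(asset):.1f}")
```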