Rafal Hyps, CEO of Sicuro Group, explains why organizations and individuals need to pay close attention.
When “seeing is believing” no longer applies
AI avatars look and sound eerily realistic. That realism is what makes them powerful and dangerous.
“AI avatars can bypass identity verification systems that are not designed to detect synthetic media,” Hyps says. “Scams powered by deepfakes are already being used to impersonate executives on video calls to approve payments. Most organizations have not updated their verification processes to account for this.”
Simply put, many companies still trust what they see on the screen. However, verification systems are designed to confirm that a real person exists, not to detect whether the “person” is an AI-generated replica.
That gap is now being exploited.
How are fraudsters using AI avatars?
The tools needed to create convincing fake identities are no longer limited to elite hackers. They are becoming increasingly accessible and easy to use.
“An attacker can generate a convincing avatar of a senior executive, feed it through a virtual camera during a video call, and pass standard liveness checks,” Hyps explains. “AI tools can also generate fake identities that include matching selfies or videos. These methods are already in use and widely available.”
This means traditional liveness cues, such as blinking, turning your head, or holding up an ID, may no longer be enough.
The risk is not theoretical. Payment authorizations, internal sign-offs, and sensitive business decisions are often made over video or audio. Compromising these channels can cause significant financial and reputational damage.
Realistic and cartoon avatars: is there a difference?
Many people assume that only hyper-realistic avatars pose a threat, while stylized or cartoon-like avatars seem playful and harmless.
But both carry serious risks.
Realistic avatars are designed so that the output passes as a real person, creating a direct impersonation risk. “While stylized or cartoon avatars look harmless, platforms still require the same biometric input to generate them. The risk with stylized avatars lies not in what they generate, but in what data is collected to create them,” Hyps adds.
Even if the final image looks like an animation, the system may rely on detailed facial scanning and biometric mapping behind the scenes.
And that brings up another long-term concern: data security.
Biometric data issues
Most AI avatar platforms require users to upload a facial image. Some go even further.
Asked what data is collected, Hyps said facial images are the minimum. “Many platforms also capture facial shape and expressions to generate output. Providers generally state that the images are analyzed and discarded, but this is not a regulated standard.”
The lack of consistent regulation is worrying.
“There have already been major breaches of biometric databases around the world, exposing millions of facial recognition records,” he warns. “The reason this is more important than a typical data breach is because compromised biometric data cannot be reset. You can change a stolen password, but you cannot change a compromised face.”
This is the key difference between biometric data and other personal information. You can update your password. You can cancel your credit card. But faces cannot be replaced.
If facial data is leaked or misused, the consequences can last a lifetime.
Profiling without permission
Another hidden risk lies in publicly available images. Many professionals post high-quality headshots on their company websites, LinkedIn profiles, or social media accounts.
According to Hyps, these images can be used without consent.
“Yes. Public photos from a company’s website or social media can be used to generate avatars or train facial recognition models without your knowledge. That data can be combined with other public information to build detailed profiles.”
In other words, someone doesn’t need to hack into your private files to exploit your likeness. One publicly available photo may be enough to create a composite version of you.
Combined with other public information, such as job title, company, and location, that likeness becomes a very convincing impersonation.
Can your avatar be reverse-engineered?
The risks go beyond identity theft.
Research shows that AI-generated avatars can reveal more information than users realize. “With enough biometric data, an identity can be narrowed down or matched against existing databases,” Hyps explains. “Tools to do this exist and are becoming more accessible.”
This means that even seemingly innocuous digital versions of ourselves can be analyzed and matched by facial recognition systems.
As tools become more widespread, the barrier to exploitation lowers.
Technology advances faster than regulations
AI-generated avatars offer creative and commercial opportunities, from digital marketing to virtual influencers and personalized content. But as Hyps makes clear, the security framework surrounding them has not kept up.
The central problem is not the technology itself, but how slowly verification systems, regulations, and corporate policies are evolving compared with how quickly the technology is being adopted.
For businesses, that may mean rethinking how they verify identities during high-risk decisions. For individuals, it may mean being more cautious about where and how their facial data is shared.
In a world where faces can be generated, voices can be replicated, and identities can be simulated in real time, one old rule no longer applies.
Seeing is no longer believing.
So what can you actually do?
AI avatars aren’t going away. Technology will continue to improve, becoming faster and more convincing. However, this does not mean that individuals and organizations are powerless.
Here are some practical steps to reduce your risk:
Don’t rely solely on video
If a request involves money, sensitive data, or urgent approval, verify it through a second channel.
Call the person directly at a known number. Send follow-up messages through your internal systems. Build a culture where double-checking is the norm, not a chore.
Tighten payment and approval processes
Companies should avoid single-person approval for large transfers. Add multiple levels of validation to financial decisions. Informal “quick approvals” over video calls are now a weakness.
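As an illustration only, a dual-control rule like the one described above can be sketched in a few lines of Python. The threshold, role names, and function are hypothetical examples, not an actual Sicuro Group recommendation:

```python
# Illustrative sketch of a dual-control payment policy.
# The threshold and roles below are hypothetical examples.
DUAL_APPROVAL_THRESHOLD = 10_000  # amounts above this need two approvers

def payment_allowed(amount: float, approvers: list[str]) -> bool:
    """Require two distinct approvers for large payments.

    A video call alone never counts as an approval record;
    each approver must confirm through a tracked channel.
    """
    distinct = set(approvers)  # ignore duplicate sign-offs by one person
    if amount > DUAL_APPROVAL_THRESHOLD:
        return len(distinct) >= 2
    return len(distinct) >= 1

print(payment_allowed(50_000, ["cfo"]))         # False: one approver is not enough
print(payment_allowed(50_000, ["cfo", "ceo"]))  # True: two distinct approvers
```

The point of the sketch is that the rule is enforced by the system, not by whoever happens to be on the call.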
Update your identity verification system
Traditional liveness checks may not be sufficient. Organizations should check whether their verification systems can detect synthetic media, not just confirm movement on camera.
Be careful with facial data
Think twice before uploading your face to a new AI avatar platform. Understand what data is collected and how it is stored. Remember: you can change a password, but you cannot change your face.
Limit public exposure as much as possible
High-resolution headshots and detailed public profiles make impersonation easier. You don’t have to disappear from the internet, but be aware of how much information is openly accessible.
Train your team to spot red flags
Unusual urgency. Slight audio delay. Minor visual glitch. Behaviors that feel “off.” Encourage employees to trust their instincts and escalate concerns.
How to spot a deepfake in 2026
Although AI has become incredibly realistic, it still pays a computational “cost” to render human biology in real time. If you suspect a caller is not genuine, run these three liveness tests:
The “profile” test
Most AI avatar models are trained on frontal data (LinkedIn photos, social media videos).
- Action: Ask the caller to turn their head 90 degrees to the side.
- What to look for: Watch the jawline and ears. In a deepfake, the “digital mask” often glitches, blurs, or even detaches from the neck when viewed from the side.
The “hand occlusion” test
Real-time AI has a hard time rendering two complex objects that interact, such as a hand moving in front of a face.
- Action: Ask the caller to slowly wave a hand in front of their face or scratch their nose.
- What to look for: The avatar frequently “flickers,” and the hand appears to pass behind the face pixels rather than in front of them.
The “light and shadow” check
- Action: Ask the caller to move the light source near their phone or laptop, or observe how their glasses react.
- What to look for: Deepfakes often have “baked-in” lighting. If the shadows on the face stay exactly the same even as the room lighting changes, it is a synthetic image.
Rakshana is an entertainment and lifestyle journalist with over 10 years of experience. She covers a wide range of stories from community and health to mental health and inspirational features. A passionate K-Pop enthusiast, she also enjoys exploring the cultural influence of music and fandom through her writing.
