AI tools and synthetic IDs disrupted KYC programs in 2025


AI-generated documents and personas reveal weaknesses in traditional identity verification

Suparna Goswami •
January 5, 2026


In 2025, artificial intelligence-generated documents and deepfake tools disrupted know-your-customer programs as synthetic identity fraud became a mainstay of financial crime. Fake IDs are not new, but AI has become highly effective at helping fraudsters empty bank accounts.


U.S. lenders face $3.3 billion in exposure to questionable synthetic identities across auto loans, credit cards, and personal loans, according to TransUnion's analysis of first-half 2025 data.

Synthetic identity fraud has been around for years, but fraud investigators say the industry has reached a tipping point. AI allows fraudsters to create convincing synthetic personas in minutes, collapsing the barrier to creating fake identities that pass verification. For leaders in fraud and payment security, the implications are clear. Identity verification built on static attributes can no longer keep up with modern fraud. The data points that institutions collect during onboarding (name, date of birth, government ID) have become a liability, not a safeguard.
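To see why static attributes fail as a safeguard, consider a toy verification routine that checks only the format and plausibility of onboarding data. This is an illustrative sketch, not any institution's actual logic; the function name, fields, and rules are assumptions for demonstration. A synthetic persona with well-formed fields passes cleanly.

```python
import re
from datetime import date

# Illustrative sketch (not any vendor's actual logic): a static KYC check
# that validates only the *form* of onboarding attributes. A synthetic
# identity built from plausible, internally consistent data passes it.

def static_kyc_check(applicant: dict) -> bool:
    """Return True if name, DOB, and SSN look well-formed and plausible."""
    name_ok = bool(re.fullmatch(r"[A-Za-z][A-Za-z .'-]+", applicant["name"]))
    dob = date.fromisoformat(applicant["dob"])
    age_ok = 18 <= (date.today() - dob).days // 365 <= 100
    # Format-only SSN test: it cannot distinguish issued numbers from
    # fabricated ones.
    ssn_ok = bool(re.fullmatch(r"\d{3}-\d{2}-\d{4}", applicant["ssn"]))
    return name_ok and age_ok and ssn_ok

# A fabricated persona with well-formed fields sails through:
synthetic = {"name": "Jordan Avery", "dob": "1991-04-12", "ssn": "123-45-6789"}
print(static_kyc_check(synthetic))  # True: the check validates form, not existence
```

The point of the sketch is the gap it exposes: every field is syntactically valid, so a pipeline built on such checks approves the account, even though no real person exists behind it.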

Nisan Bangiev, director and head of fraud risk at Valley Bank, said the industry needs to rethink the way it looks at identity. “In the past, the way we thought about identity was very static. You had your name, your date of birth, your Social Security number, and that was your identity. But now that doesn't work,” Bangiev said.

Bank fraud teams now face the troubling reality that passing KYC does not guarantee legitimacy; it may simply mean the fraudster had the right data points. “We need to understand how we create digital identities, how we validate them over time, and how we maintain trust throughout the customer journey,” he said.

The ease with which synthetic identities can be created has reached astonishing levels. Steve Lenderman, head of fraud prevention at isolved, demonstrated this by building his own synthetic identity in minutes.

“It took us about seven minutes to build a fully functional identity that could pass basic validation checks. The barrier to entry has never been lower,” Lenderman said.

The scope of synthetic identity fraud has also expanded far beyond its traditional credit card origins. “Synthetic identities are not new,” Lenderman said. “What’s new is the scale and where they’re popping up. We’re no longer just seeing them in credit, we’re seeing them in insurance, government, healthcare and crypto.”

Once onboarded, fabricated identities blend seamlessly into legitimate customer traffic. They can remain dormant for months before being activated for fraud, making them much harder to detect.

This creates a gap between onboarding and ongoing risk. Fraud may not surface until weeks or months after the account is approved, long after the initial KYC check has been completed. By the time suspicious patterns surface, significant losses may have already occurred.

As AI-generated photos are increasingly used to create lifelike IDs, traditional KYC is reduced to a checkbox exercise for banks. Image-generation tools can produce forged documents that appear legitimate, such as government IDs, utility bills and bank statements. At the same time, large language models help fraudsters fabricate detailed personal histories, including employment records, addresses and financial behavior, giving synthetic identities the depth and consistency expected by traditional KYC processes.

Growing fraud and KYC failures triggered aggressive regulatory measures around the world in 2025. According to Fenergo analysis, global regulators imposed around 139 fines on financial institutions in the first half of 2025, totaling $1.23 billion, a 417% increase over the same period in 2024.

For fraud and payment security leaders, one message is becoming inescapable: one-time KYC is no longer enough. As synthetic identities become more sophisticated and have longer lifetimes, identity verification must move from static checkpoints to a continuous process.

“Fraud detection has to be real-time,” said Idan Bar-Dov, CEO of real-time web intelligence company Heka Global. “If you rely on checks that are done only once during onboarding, you miss out on what happens afterward, and that’s where fraud actually thrives.” That gap is where synthetic identities quietly age in the system until they are ready to be exploited.

Identity today is about context and intent, observed through behavior over time. How a person types or holds their phone can reveal whether a session is legitimate, and behavioral signals are emerging as a way to continuously verify identity without adding friction.
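One such behavioral signal is keystroke cadence: comparing a session's inter-key timings against a user's enrolled baseline. The sketch below is a simplified illustration of the idea; the statistics used and the 25% tolerance are assumptions for demonstration, not how any commercial behavioral-biometrics product works.

```python
import statistics

# Simplified illustration of a behavioral signal: does this session's typing
# rhythm resemble the user's enrolled baseline? The 25% tolerance is an
# assumption for demonstration, not a product setting.

def matches_typing_baseline(baseline_ms: list, session_ms: list,
                            tolerance: float = 0.25) -> bool:
    """True if the session's mean and spread of inter-key gaps sit near baseline."""
    b_mean, s_mean = statistics.mean(baseline_ms), statistics.mean(session_ms)
    b_sd, s_sd = statistics.pstdev(baseline_ms), statistics.pstdev(session_ms)
    mean_ok = abs(s_mean - b_mean) <= tolerance * b_mean
    sd_ok = abs(s_sd - b_sd) <= tolerance * max(b_sd, 1.0)
    return mean_ok and sd_ok

baseline = [120, 135, 128, 140, 125]   # the user's usual inter-key gaps (ms)
same_user = [118, 132, 130, 138, 127]  # human-paced, similar variability
bot_paste = [12, 10, 11, 12, 10]       # machine-fast, uniform timings
print(matches_typing_baseline(baseline, same_user))  # True
print(matches_typing_baseline(baseline, bot_paste))  # False
```

The appeal of this class of signal, as the practitioners above note, is that it is collected passively: the user keeps typing normally while verification continues in the background.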

The emerging consensus among practitioners is clear. KYC is still essential, but it's only a starting point.

“KYC is still important, but it cannot be the end of the story. Risk does not stop once an account is opened, so identity has to be verified over time,” Bangiev said.
