Fraudsters' tactics include using AI to generate primary account numbers (PANs) and test them continuously, Visa's Mirfin said. A PAN is the card identifier found on payment cards, usually 16 digits but up to 19 in some cases.
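One reason randomly generated card numbers don't trivially work is that a PAN carries a built-in check digit computed with the Luhn algorithm (part of the ISO/IEC 7812 standard). A minimal sketch of that check, using a well-known Visa test number for illustration:

```python
def luhn_valid(pan: str) -> bool:
    """Return True if the PAN passes the Luhn check-digit test (ISO/IEC 7812)."""
    digits = [int(ch) for ch in pan if ch.isdigit()]
    if not 12 <= len(digits) <= 19:  # rough length bound for card numbers
        return False
    total = 0
    # Walk from the rightmost digit; double every second digit,
    # subtracting 9 when the doubled value exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4111111111111111"))  # classic Visa test PAN: True
```

Passing the Luhn check only means a number is well-formed, not that it belongs to a live account, which is why attackers still have to probe the network for approval responses.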
Criminals use AI bots to repeatedly attempt to submit online transactions using combinations of primary account numbers, card verification values (CVVs) and expiration dates until they receive an approval response.
According to Visa, this technique, known as an enumeration attack, leads to $1.1 billion in annual fraud losses, making up a significant portion of total global losses from fraud.
“We look at over 500 different attributes around [each] transaction, we score that and create a score, and that's the AI model that actually does that. We process about 300 billion transactions a year,” Mirfin told CNBC.
Each transaction is assigned a real-time risk score to help detect and prevent enumeration attacks in transactions where purchases are processed remotely, without a physical card, via a card reader or terminal.
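The mechanics described above amount to combining many per-transaction attributes into a single risk score that the issuer can act on. A toy sketch of that idea, where the attribute names, weights, and threshold are invented for illustration and bear no relation to Visa's actual model:

```python
# Illustrative only: these attributes and weights are invented, not Visa's.
RISK_WEIGHTS = {
    "card_not_present": 0.30,        # remote / e-commerce transaction
    "new_merchant_for_card": 0.20,   # card has never paid this merchant
    "velocity_last_hour_high": 0.35, # unusual burst of recent attempts
    "geo_mismatch": 0.15,            # IP country differs from card country
}

def risk_score(txn: dict) -> float:
    """Combine boolean risk attributes into a score in [0, 1]."""
    return sum(w for attr, w in RISK_WEIGHTS.items() if txn.get(attr))

def decision(txn: dict, threshold: float = 0.5) -> str:
    # In practice the issuer, not the network, makes the final call.
    return "review" if risk_score(txn) >= threshold else "approve"
```

A production model replaces the hand-set weights with a learned function over hundreds of attributes, but the shape of the pipeline, attributes in, real-time score out, issuer decides, is the same.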
“Every single one of those [transactions] is being handled by AI. It looks at a range of different attributes and evaluates every single transaction,” Mirfin said.
“If we see a new type of fraud happening, our models will recognize it, capture it, rate those transactions as high risk, and customers can decide not to authorize those transactions.”
Visa also uses AI to assess token provisioning requests for potential fraud, to combat fraudsters who use social engineering and other deception to illegally provision tokens and execute fraudulent transactions.
Over the past five years, the company has invested $10 billion in technology to help reduce fraud and strengthen network security.
Cybercriminals are using generative AI and other emerging technologies, such as voice cloning and deepfakes, to trick people, Mirfin warned.
“Romance scams, investment scams, pig butchering: they're all using AI,” he said.
Pig butchering refers to a scam in which criminals build relationships with victims and convince them to put money into fake cryptocurrency trading or investment platforms.
“When you think about what they're doing, it's not just criminals sitting in a marketplace picking up a phone and calling somebody. They're using some level of artificial intelligence, whether it's voice cloning, deepfakes, social engineering. They're using artificial intelligence to carry out different types of activities,” Mirfin said.
Generative AI tools such as ChatGPT allow scammers to create more convincing phishing messages to trick people.
According to U.S.-based identity and access management company Okta, cybercriminals need as little as three seconds of audio to clone a voice with generative AI. The clone can then be used to trick family members into believing a loved one is in trouble, or to trick bank staff into transferring funds out of a victim's account.
Okta says generative AI tools are also being used to create deepfakes of celebrities to fool their fans.
“With the use of generative AI and other emerging technologies, scams are more convincing than ever, leading to unprecedented losses for consumers,” Paul Fabara, Visa's chief risk and client services officer, said in the company's biannual threats report.
In a report, Deloitte's Center for Financial Services said cybercriminals using generative AI can commit fraud far more cheaply, targeting many victims at once with the same or fewer resources.
“Such incidents are likely to surge in the coming years as bad actors find and deploy increasingly sophisticated and cheap generative AI to defraud banks and their customers,” the report said, estimating that generative AI could increase fraud losses in the U.S. from $12.3 billion in 2023 to $40 billion by 2027.
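The growth implied by those Deloitte figures can be checked directly: going from $12.3 billion in 2023 to $40 billion by 2027 is four years of compounding.

```python
# Implied compound annual growth rate from the Deloitte estimate quoted above.
start, end, years = 12.3, 40.0, 4  # $ billions, 2023 -> 2027
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 34% per year
```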
Earlier this year, an employee at a Hong Kong-based firm transferred $25 million to fraudsters who had used a deepfake to pose as the company's chief financial officer and instruct the transfer.
Chinese state media reported a similar incident in Shanxi province this year, in which an employee was tricked into transferring 1.86 million yuan ($262,000) to a scammer who used a deepfake of his boss over a video call.
