In one of the biggest cases uncovered this year, a finance worker in Hong Kong was tricked into transferring more than $25 million to fraudsters who used deepfake technology to pose as colleagues over video calls, authorities told local media in February.
Last week, British engineering firm Arup confirmed to CNBC that it was the company involved in the incident, but declined to provide further details because of the ongoing investigation.
David Fairman, chief information officer and chief security officer for Asia Pacific at Netskope, said such threats have been growing since generative AI tools went mainstream with the widespread release of OpenAI's ChatGPT in 2022.
“The public availability of these services has lowered the barrier to entry for cybercriminals and they no longer require special technical skills,” Fairman said.
As AI technology continues to evolve, both the volume and sophistication of such fraud will grow, he added.
A range of generative AI services can produce human-like text, image and video content, making them a powerful tool for malicious actors seeking to digitally manipulate and recreate specific individuals.
An Arup spokesperson told CNBC: “Like many other businesses around the world, our operations are subject to regular attacks including invoice fraud, phishing scams, WhatsApp voice spoofing and deepfakes.”
The finance worker was reportedly asked to wire money during a video call with people who appeared to be the company's chief financial officer and other staff members, but every other attendee in the meeting was actually a digitally recreated deepfake.
Arup confirmed that “fake audio and video” were used in the attack, adding that “the number and sophistication of these attacks has increased sharply in recent months.”
Chinese state media reported a similar incident in Shanxi province this year, in which a female financial employee was tricked into transferring 1.86 million yuan ($262,000) to a scammer's account after a video call with a deepfake of her boss.
In addition to direct attacks, companies are increasingly concerned about the possibility that deepfake photos, videos or speeches of superiors could be used in malicious ways, cybersecurity experts say.
Deepfakes of company executives could be used to spread fake news to manipulate stock prices, smear a company's brand or sales, or spread other harmful misinformation, according to Jason Hogg, a cybersecurity expert and executive in residence at Great Hill Partners.
“This is just scratching the surface,” said Hogg, a former FBI special agent.
He emphasized that generative AI can create deepfakes based on vast amounts of digital information, such as public content hosted on social media and other media platforms.
In 2022, Binance Chief Communications Officer Patrick Hillman claimed in a blog post that scammers had created a deepfake of him based on past news interviews and television appearances and used it to trick clients and contacts into attending meetings.
Netskope's Fairman said those risks have led some executives to start cleaning up or restricting their companies' online presence, fearing it could be used as an avenue by cybercriminals.
Deepfake technology is already widespread outside the corporate world.
From fake pornographic images to doctored videos advertising cooking equipment, celebrities like Taylor Swift have fallen victim to deepfake technology, and deepfakes of politicians are also rampant.
Meanwhile, some scammers are creating deepfakes of individuals' family and friends in an attempt to scam them out of money.
Hogg said the broader problem will accelerate and worsen for some time to come, because preventing cybercrime requires careful analysis to develop the systems, practices and controls needed to protect against new technologies.
But cybersecurity experts told CNBC that companies can bolster their defenses against AI-enabled threats through better staff training, cybersecurity testing, and requiring codewords and multiple levels of approval for all transactions — something that could have prevented cases like Arup's.
Clarification: This story has been updated to accurately reflect David Fairman's role at Netskope.
