Karl Froggett spent more than 20 years as one of Citibank's chief information security officers, protecting the bank's infrastructure from increasingly sophisticated cyberattacks. And while criminal tricks have long plagued the banking and business world, from low-tech paper forgeries to rudimentary email fraud, deepfake technology powered by generative AI is unlike anything that has come before.
“I'm very concerned about deepfakes being used against businesses,” said Froggett, who now serves as chief information officer (CIO) of Deep Instinct, a company that uses AI to fight cybercrime.
Industry experts say boardrooms and office cubicles are becoming battlegrounds where cybercriminals routinely deploy deepfake technology in attempts to steal millions of dollars from companies, and as a result, they have become a proving ground for efforts to spot AI-powered fraud before it succeeds.
“The challenge we face is that generative AI is very real,” Froggett says.
Generative AI video and audio tools are arriving fast and improving rapidly. In February, OpenAI unveiled a video generation tool called Sora, and at the end of March it introduced an audio tool called Voice Engine that can realistically recreate a person's voice from a 15-second sample. OpenAI said it was making Voice Engine available to only a small number of users because of the risks the technology poses.
Froggett, who is from England, cites regional British accents as an example of how much detail the technology can now capture. “I use nuances and words that you may never have heard before, but generative AI consumes what I publish. I'm sure there's a speech I gave posted somewhere, and from there it can generate eerily convincing voicemails, emails and videos,” he said.
Experts cite a widely publicized incident in Hong Kong last year in which an employee at a multinational company joined a video call with what appeared to be colleagues, including the company's chief financial officer (CFO), and was tricked into transferring $25 million to accounts controlled by fraudsters. The “colleagues” on the call were convincing deepfakes. Experts believe the incident is a sign of things to come.
Despite OpenAI restricting access to its audio and video tools, the number of dark web sites selling copycat GPT-style tools has exploded in the past few months. “The bad guys literally just got these tools. … They're just getting started,” Froggett said.
Rupal Hollenbeck, president of Check Point Software, says a snippet of someone's voice lasting less than 30 seconds is all it takes to create a convincing deepfake, and cybercriminals can now access AI-driven deepfake tools for a few dollars, if not pennies. “And that's only on the audio side. The same goes for video, and that changes things,” Hollenbeck said.
The steps companies are taking to thwart deepfakes have implications for how all of us should interact with friends, family, and colleagues in the age of generative AI.
How to identify AI video scammers
There are many ways to spot an AI scammer, some of which are relatively simple.
First, Hollenbeck says, if you have any doubts about the authenticity of someone on a video call, ask them to turn their head to the right or left, or to turn around. If the caller complies but their face distorts or disappears from the screen, end the call immediately.
“Now I'm teaching this to everyone I know: make them look left or right. AI doesn't have the ability to go beyond what it can see. AI today is flat, and that's very powerful to know,” she said.
But we don't know how long that will last.
Chris Pearson, CEO of Blackcloak, a firm specializing in digital executive protection, believes it's only a matter of time before deepfakes come with 3D capabilities. “Models are improving so rapidly that these tricks will be sidelined,” Pearson said.
Pearson also says don't be afraid to ask for old-fashioned “proof of life” evidence on a video call, such as asking participants to hold up a company report or that day's newspaper. If they can't follow these basic requests, that's a red flag.
How can using codewords and QR codes help?
Old-fashioned codewords are also effective, provided they are shared through a separate channel and never stored in plain text. Hollenbeck and Pearson recommend that company executives generate a new codeword each month and store it in an encrypted password vault. If you're in doubt about who you're talking to, you can ask them to send you the codeword over another channel, such as a text message. Then set a threshold for invoking the codeword: for example, any request to move more than $100,000 should trigger the codeword check.
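To make that threshold rule concrete, here is a minimal sketch in Python of how a payments workflow might gate large transfers behind a codeword check. The names (CODEWORD_THRESHOLD, verify_codeword, and so on) are hypothetical illustrations of the experts' advice, not any vendor's implementation; note that only a salted hash of the codeword is stored, never the plain text.

```python
import hashlib
import hmac

# Illustrative threshold from the article: transfers above $100,000
# trigger an out-of-band codeword check.
CODEWORD_THRESHOLD = 100_000

def hash_codeword(codeword: str, salt: bytes) -> bytes:
    """Store only a salted hash of the monthly codeword, never plain text."""
    return hashlib.pbkdf2_hmac("sha256", codeword.encode(), salt, 100_000)

def requires_codeword(amount: float) -> bool:
    """Large transfers must be confirmed out of band."""
    return amount > CODEWORD_THRESHOLD

def verify_codeword(supplied: str, stored_hash: bytes, salt: bytes) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(hash_codeword(supplied, salt), stored_hash)

# Example: a $250,000 transfer request arrives over a video call.
salt = b"rotate-this-salt-monthly"            # illustrative only
stored = hash_codeword("august-osprey", salt)  # set out of band each month

amount = 250_000
if requires_codeword(amount):
    supplied = "august-osprey"  # requester sends this via a separate channel
    assert verify_codeword(supplied, stored, salt)
```

The constant-time comparison (hmac.compare_digest) is a standard precaution so that an attacker probing the check cannot learn the codeword character by character.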
For businesses, conducting internal calls only through approved corporate channels also greatly reduces the risk of being fooled by deepfakes.
“Where we get into trouble is when we go outside the network,” Pearson said.
Nirupam Roy, an assistant professor of computer science at the University of Maryland, said instances of business deepfakes are on the rise, and the threat goes beyond fraudulent bank transfers. “It's not hard to imagine how deepfakes like this could be used for targeted defamation to damage the reputation of a product or company,” he said.
Roy and his team have developed a system called TalkLock that works to identify both deepfakes and shallowfakes, the latter of which, he explains, “connect partial truths to small lies without relying on complex editing techniques.”
It may not be the answer to highly personalized AI-generated fraud, but it is designed to let individuals (via an app) and businesses (via a verification module) check what they are seeing. It works by embedding a QR code that can prove a piece of content's authenticity into audiovisual media such as live public appearances by politicians and celebrities, social media posts, advertisements, and news broadcasts, Roy says. That addresses a growing problem with informal recordings: unlike official media, video and audio captured by audience members at an event cannot be verified through metadata.
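The article doesn't describe TalkLock's internals, so the following is only a rough Python sketch of the general pattern of embedding a verifiable code in media: a publisher signs a small claim (who, what, when), and a viewer's app checks the signature and freshness. The payload here is what would be rendered as the on-screen QR code; HMAC with a shared key keeps the sketch self-contained, though a real system would use public-key signatures so that verifiers cannot also forge.

```python
import hashlib
import hmac
import json
import time

SECRET_KEY = b"publisher-signing-key"  # hypothetical; a real system uses key pairs

def make_auth_payload(speaker: str, event: str) -> str:
    """Build the signed payload that would be rendered as an on-screen QR code."""
    claim = {"speaker": speaker, "event": event, "ts": int(time.time())}
    body = json.dumps(claim, sort_keys=True)
    sig = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()
    return json.dumps({"claim": claim, "sig": sig})

def verify_auth_payload(payload: str, max_age_s: int = 300) -> bool:
    """Check the signature and freshness of a scanned payload."""
    data = json.loads(payload)
    body = json.dumps(data["claim"], sort_keys=True)
    expected = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()
    fresh = time.time() - data["claim"]["ts"] <= max_age_s
    return hmac.compare_digest(expected, data["sig"]) and fresh

token = make_auth_payload("Mayor Example", "Town hall livestream")
print(verify_auth_payload(token))  # True for a genuine, recent payload
```

The timestamp check matters for live media: a code that keeps rotating during an appearance is much harder to splice onto footage recorded elsewhere.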
How to practice multi-factor authentication offline
Even as protective technologies multiply, experts predict the arms race between deepfakes and deepfake-detection tools will only escalate. Companies, though, can put specific safeguards in place to prevent the worst consequences of deepfakes, safeguards that are harder to replicate in our personal lives.
Eyal Benishti, CEO of email security software company Ironscales, said organizations are increasingly adopting segregation of duties to ensure that no one person can be tricked into harming the company. In practice, that means splitting up the processes that handle sensitive data and assets. For example, changing the bank account information used for bill or payroll payments might require two people to make the change and, ideally, a third to be notified. “This way, if an employee falls for a social engineering attack asking for redirection of a bill payment, there is a stopgap, because different stakeholders are brought in to play a role in the chain of command,” Benishti said.
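As a hypothetical illustration of that segregation-of-duties rule, the Python sketch below lets a change to payment details take effect only after two distinct approvers sign off, with a third party notified afterward. The class and function names are invented for this example, not taken from Ironscales.

```python
from dataclasses import dataclass, field

@dataclass
class PayoutChangeRequest:
    """A request to change bank details, gated by segregation of duties."""
    requester: str
    new_account: str
    approvals: set[str] = field(default_factory=set)

    def approve(self, approver: str) -> None:
        # The requester can never approve their own change.
        if approver == self.requester:
            raise PermissionError("requester cannot self-approve")
        self.approvals.add(approver)

    def is_executable(self) -> bool:
        # Two distinct approvers are required before the change takes effect.
        return len(self.approvals) >= 2

def notify_third_party(req: PayoutChangeRequest) -> None:
    # Ideally a third stakeholder learns the change happened (audit trail).
    print(f"NOTICE: payout account changed to {req.new_account} "
          f"by {req.requester}, approved by {sorted(req.approvals)}")

req = PayoutChangeRequest("alice", "DE89 3704 0044 0532 0130 00")
req.approve("bob")
req.approve("carol")
if req.is_executable():
    notify_third_party(req)
```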
At the most basic level, Hollenbeck says, organizations and their employees need to adopt a multi-factor mindset, keeping multiple ways to reality-test what they see and hear. At the end of the day, old-fashioned tactics still work, like walking down the hallway to confirm a request with your boss in person, something deepfakes cannot currently counterfeit.
“They used to say seeing is believing, but that's no longer the case,” Hollenbeck says.
It’s also wise to remember that deepfakes are just the latest in a long line of scams that prey on human vulnerabilities by creating a false sense of urgency, from three-card monte to pigeon drops. So, according to Pearson, the best antidote to deepfakes may be the simplest one: slowing down. It's a tactic that may be easier for individuals to apply in their personal lives than for employees at work.
“Slowing down will almost always get you the definitive answer. Every company should have a safe harbor policy so that employees who feel they are being pressured into a decision feel entitled to decline, call security, and do no harm,” Pearson said. That kind of latitude is often missing from corporate cultures.
“We have to give people the power to stop and say no. When people don't feel like they can say no, that's when mistakes happen,” Pearson said.
