Generative AI is making it easier to create deepfakes that sound and look realistic. As a result, some of the more sophisticated spoofers are taking their social engineering attacks to a more nefarious level.
In its early stages, deepfake AI could generate only a generic representation of a person. More recently, deepfakes have used synthesized voices and videos of specific individuals to launch cyberattacks, create fake news and damage reputations.
How AI deepfake technology works
Deepfakes use deep learning techniques, such as generative adversarial networks, to digitally alter and simulate real people. Malicious examples include mimicking a manager’s instructions to employees, generating fake distress messages to family members and distributing doctored, embarrassing photos of individuals.
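To make the adversarial idea concrete, here is a minimal, hypothetical sketch of a GAN training loop in PyTorch. The layer sizes, optimizer settings and placeholder data are illustrative assumptions, not any production deepfake system:

```python
# Minimal sketch of the adversarial training loop behind deepfakes.
# All shapes and architectures are illustrative placeholders.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 784  # e.g., flattened 28x28 face crops

# Generator: maps random noise to a synthetic image.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, image_dim), nn.Tanh())

# Discriminator: scores whether an image looks real or generated.
D = nn.Sequential(nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

loss = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real from fake.
    fake_images = G(torch.randn(batch, latent_dim)).detach()
    d_loss = loss(D(real_images), ones) + loss(D(fake_images), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake_images = G(torch.randn(batch, latent_dim))
    g_loss = loss(D(fake_images), ones)  # generator wants "real" verdicts
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Each side improves against the other -- the arms race that makes
# mature deepfakes hard to distinguish from genuine footage.
train_step(torch.rand(32, image_dim) * 2 - 1)  # placeholder "real" batch
```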
Such cases are becoming more common as deepfakes become more realistic and harder to detect. They’re also easier to generate, thanks to improved tools created for legitimate purposes. Microsoft, for example, rolled out a language translation service that mimics a speaker’s own voice in another language. A major concern, however, is that these same tools make it easier for perpetrators to deceive victims and disrupt operations.
Fortunately, tools to detect deepfakes have also improved. Deepfake detectors can search for telltale biometric signatures in videos, such as human heartbeats or voices produced by human vocal organs rather than synthesizers. Ironically, the tools currently used to train and improve these detectors may eventually also be used to train the next generation of deepfakes.
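As an illustration of the heartbeat idea, the following hedged sketch estimates a pulse-like signal from subtle color changes in a video’s face region, a technique known as remote photoplethysmography. The fixed face box and file name are assumptions; real detectors track faces and filter the signal far more robustly:

```python
# Illustrative sketch of one biometric check a detector might run:
# recovering a heartbeat-like signal from color changes in a face region.
import cv2
import numpy as np

def estimate_pulse_hz(video_path: str, face_box=(100, 100, 200, 200)) -> float:
    x, y, w, h = face_box  # placeholder region; real tools track the face
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    greens = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        roi = frame[y:y + h, x:x + w]
        greens.append(roi[:, :, 1].mean())  # green channel tracks blood flow best
    cap.release()

    # Find the dominant frequency in the plausible heart-rate band.
    signal = np.asarray(greens) - np.mean(greens)
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs > 0.7) & (freqs < 4.0)  # roughly 42-240 bpm
    return freqs[band][np.argmax(spectrum[band])]

# A dominant frequency near a plausible heart rate is weak evidence of a
# live subject; its absence can flag synthetic footage for human review.
# print(f"{estimate_pulse_hz('clip.mp4') * 60:.0f} bpm")
```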
In the meantime, companies are moving beyond simple employee training on spotting the signs of these attacks toward more sophisticated authentication tools and security procedures, in preparation for the growing number and sophistication of deepfake attacks. There are several steps you can take to get there.
According to Robert Scalise, global managing partner of risk and cyber strategy at consulting firm TCS, deepfake attacks can be grouped into four general categories:
- Misinformation, disinformation and malinformation.
- Intellectual property infringement.
- Defamation.
- Pornography.
Examples of deepfake attacks
According to Oded Vanunu, head of product vulnerability research at IT security provider Check Point Software Technologies, the first serious deepfake attack occurred in 2019, when hackers impersonated a CEO’s voice in a phone call, resulting in a fraudulent bank transfer of $243,000. The incident forced financial institutions to become more vigilant and take greater precautions, but hackers have grown more sophisticated in turn.
In 2021, criminals tricked a bank manager into transferring a whopping $35 million to fraudulent bank accounts. Gregory Hatcher, founder of cybersecurity consultancy White Knight Labs, said: “Criminals timed their attack perfectly, and the bank manager transferred the funds.”
The latest generation of bots uses deepfake technology to evade detection, said Sam Crowther, founder and CEO of bot protection and mitigation software provider Kasada. “When deepfakes are combined with bots, they pose a growing threat to our social, business and political systems,” he explained. “The latest advances in AI and malicious automation have made deepfakes more realistic and accessible, spreading disinformation on a previously unimaginable scale.” Attackers use bots to create fake accounts, share deepfake videos and spread disinformation through social media platforms, he said.
“Deepfake attacks are no longer a mythical threat,” Jendruszak warned, adding that companies must prepare their workforces accordingly. “That means teaching staff what to look out for and generally increasing their education on its prevalence.”
Best practices for detecting deepfake technology
Once upon a time, most online video and audio presentations were accepted as authentic. Not anymore. Today, deepfake detection can be a combination of art and science.
According to Jendruszak, humans can detect the irregular vocal rhythms and unrealistic shadows around the eyes of an AI-generated person. “Above all else,” he added, “if something [about a deepfake] still feels wrong, it’s because the process still has an error.”
There are some telltale signs that humans can look for in distinguishing between real and fake images:
- Inconsistencies in skin or body parts.
- Shadows around the eyes.
- Unusual blinking patterns.
- Abnormal glare on eyeglasses.
- Unrealistic mouth movements.
- Lip color that looks unnatural compared with the face.
- Facial hair that doesn’t match the face.
- Unrealistic moles on the face.
Sandy Fryderman, president, CTO and founder of financial infrastructure services provider Industry FinTech, said fakes were once easy to detect in videos. “But the technology is much better now, and many of these old ‘tells’ are now gone.” Today, the telltale signs tend to be lighting and shading anomalies that deepfake technology has yet to perfect.
Vanunu suggested that forensic analysis can help by examining the metadata of video and audio files for signs of manipulation or tampering. Investigators can also use specialized reverse image search software to discover visually similar images used in other contexts. Additionally, companies are increasingly using 3D synthetic data to develop more sophisticated facial recognition models that rely on 3D, multi-sensor and dynamic facial data for liveness detection, said Yashar Behzadi, CEO and founder of platform provider Synthesis AI.
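As a rough sketch of the metadata step, the snippet below shells out to ffmpeg’s ffprobe (assumed to be installed) and flags a couple of illustrative anomalies. The checks and the file name suspect.mp4 are hypothetical; dedicated forensic tools inspect far more:

```python
# Minimal sketch of metadata inspection using ffmpeg's ffprobe CLI.
import json
import subprocess

def inspect_metadata(path: str) -> dict:
    # Dump container and stream metadata as JSON.
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

def flag_anomalies(meta: dict) -> list[str]:
    flags = []
    tags = meta.get("format", {}).get("tags", {})
    # A missing creation time or an editing tool in the encoder tag can
    # both warrant a closer look (neither proves manipulation).
    if "creation_time" not in tags:
        flags.append("no creation_time tag")
    encoder = tags.get("encoder", "")
    if encoder:
        flags.append(f"encoder tag present: {encoder}")
    return flags

# meta = inspect_metadata("suspect.mp4")  # hypothetical file
# print(flag_anomalies(meta))
```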
For audio deepfakes, Hatcher recommends listening for telltale signs such as choppy sentences, odd word choices and unusual inflections or tones in the speaker’s voice.
New standards bodies, such as the Coalition for Content Provenance and Authenticity (C2PA), are creating technical standards for verifying the source, history and provenance of content. C2PA is an industry collaboration that includes companies such as Adobe, Arm, Intel, Microsoft and Truepic. Adobe and Microsoft are also working on Content Credentials to help verify the authenticity of images and videos.
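For readers who want to experiment, here is a hedged sketch of checking a file for C2PA Content Credentials using the community’s open-source c2patool CLI, which must be installed separately. The file name is a placeholder, and the absence of a manifest only means a file carries no verifiable provenance, not that it is fake:

```python
# Sketch: read a file's C2PA manifest store via the c2patool CLI,
# which prints manifest data as JSON when invoked with a file path.
import json
import subprocess

def read_content_credentials(path: str) -> dict | None:
    result = subprocess.run(["c2patool", path],
                            capture_output=True, text=True)
    if result.returncode != 0:
        return None  # no C2PA manifest found, or validation failed
    return json.loads(result.stdout)

# manifest = read_content_credentials("photo.jpg")  # hypothetical file
# print("provenance found" if manifest else "no Content Credentials")
```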
How to create strong security procedures
Preventing damage from deepfake attacks should be part of your organization’s security strategy. Businesses should consult guidance from the Cybersecurity and Infrastructure Security Agency, such as its Zero Trust Maturity Model, for procedures that help mitigate deepfake attacks.
Companies can also do the following to prevent spoofing attacks:
- Develop a multi-step authentication process that includes verbal and internal approval systems (see the sketch after this list).
- Reverse-engineer how hackers use deepfakes to infiltrate security systems and alter processes.
- Establish policies and procedures based on industry norms and emerging standards.
- Stay abreast of the latest tools and technologies to thwart increasingly sophisticated deepfakes.
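To illustrate the first item above, here is a toy sketch of a multi-step approval flow in which a large transfer executes only after a verbal callback and two independent approvals. All names and thresholds are hypothetical; real controls belong in banking and ERP systems, not a script:

```python
# Toy sketch: a payment request executes only after independent checks
# pass, so one spoofed voice or email is never sufficient on its own.
from dataclasses import dataclass, field

@dataclass
class TransferRequest:
    requester: str
    amount_usd: float
    callback_verified: bool = False          # verbal check via a known-good number
    approvers: set[str] = field(default_factory=set)

    def approve(self, manager: str) -> None:
        if manager != self.requester:        # no self-approval
            self.approvers.add(manager)

    def can_execute(self) -> bool:
        # Large transfers demand a verbal callback plus two approvers.
        if self.amount_usd >= 10_000:        # hypothetical threshold
            return self.callback_verified and len(self.approvers) >= 2
        return len(self.approvers) >= 1

req = TransferRequest("cfo@example.com", 250_000)
req.approve("controller@example.com")
req.callback_verified = True                 # confirmed over a known phone number
print(req.can_execute())                     # False: still needs a second approver
```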
The future of deepfake attacks
A natural extension of ransomware as a service will be deepfakes as a service, based on neural network technology that allows anyone and everyone to create convincing videos. “In the face of the potential for such a spread,” Scalise warned, organizations will have to rethink their rules of engagement.
Deepfake attacks are evolving in step with the new technologies designed to detect them. “The future of deepfake attacks is difficult to predict, but advances in AI are likely to make them more prevalent and sophisticated,” Vanunu reasoned. “As the technology behind deepfakes continues to improve, it will become easier for attackers to create convincing deepfakes and make them more difficult to detect.”