The use of generative AI and open source tools has made it easier for hackers to create deepfakes and voice clones that mimic the appearance and voice of another person. According to a report from ID R&D offering developers guidance on video injection attacks, the complexity of carrying out such fraud has decreased significantly, as have the costs and expertise required.
While creating deepfake videos for social media is relatively easy from an attacker's perspective, real-time video injection attacks require more advanced technology and sophisticated delivery mechanisms.
In one recent incident, British engineering group Arup reportedly lost around $25 million after fraudsters used an AI-generated deepfake to impersonate the group's CFO during a video conference call.
Such video injection attacks are becoming increasingly common, particularly within know-your-customer (KYC) systems that compare biometric data, such as video frames of a person's face, with identity documents.
During a recent webinar, ID R&D President Alexey Khitrov demonstrated how freely accessible software can make one person convincingly appear to be another. A poll of attendees revealed that the majority of organizations have either already encountered injection attacks or deepfakes, or expect to encounter them in the near future.
How does a video injection attack work?
In a video injection attack, a hacker manipulates or forges a digital video stream and inserts it into a communication channel to fool biometric systems or human operators. These attacks involve digital manipulation techniques such as 3D rendering, face morphing, face swapping, and deepfakes.
Video injection is often used to fool remote facial recognition systems during onboarding and KYC processes, for example when an individual opens a bank account using a smartphone, laptop, or PC.
These attacks can be carried out in a variety of ways: by exploiting vulnerabilities in hardware, software, network protocols, or client-server interactions, or by manipulating virtual environments and external devices.
According to a guidance paper from ID R&D, popular methods for video manipulation include virtual camera software (such as ManyCam), hardware video sticks, JavaScript injection, smartphone emulators, and network traffic interception. There are also more advanced techniques, such as hardware injection, which require a high level of expertise to implement.
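As a simple illustration of why virtual camera software is such a popular vector, a browser-based capture page can at least flag video devices whose labels match known virtual camera products. The TypeScript sketch below is a naive heuristic, not a method from ID R&D's guidance; the blocklist and the flagSuspiciousCameras helper are illustrative assumptions, and device labels can be spoofed, so a check like this can only complement server-side detection.

```typescript
// Naive client-side heuristic: flag video input devices whose labels match
// known virtual camera software. Illustrative only -- labels are spoofable,
// so this must never be the sole injection-attack defense.
const VIRTUAL_CAMERA_HINTS = ["manycam", "obs", "virtual", "snap camera"]; // assumed blocklist

async function flagSuspiciousCameras(): Promise<string[]> {
  // Device labels are only populated after the user grants camera permission.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const devices = await navigator.mediaDevices.enumerateDevices();
  stream.getTracks().forEach((t) => t.stop()); // release the camera

  return devices
    .filter((d) => d.kind === "videoinput")
    .map((d) => d.label)
    .filter((label) =>
      VIRTUAL_CAMERA_HINTS.some((hint) => label.toLowerCase().includes(hint))
    );
}

flagSuspiciousCameras().then((suspects) => {
  if (suspects.length > 0) {
    console.warn("Possible virtual cameras detected:", suspects);
  }
});
```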
Organizations are encouraged to adhere to specific certifications and standards from the ISO/IEC 27000 family when developing KYC systems. Although these standards do not specifically address video injection attacks, they contribute to the robustness of the overall infrastructure.
How can we prevent video injection attacks?
The ID R&D report noted that while many KYC systems can identify standard presentation attacks, there are certain vulnerabilities that are not covered by current standards.
Common methods for combating video injection attacks include encrypting and securely transmitting the video feed to prevent tampering, and continuous authentication such as biometric checks to ensure the ongoing validity of the video feed.
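To make the transport idea concrete, the sketch below signs each frame with an HMAC over a timestamp and the frame bytes, so the receiving server can reject frames that were altered or replayed. It is a minimal TypeScript (Node) illustration assuming a per-session shared secret established during key exchange; real deployments would typically rely on TLS plus vendor-specific frame integrity schemes rather than this exact design.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Tamper-evident frame transport sketch: the capture client signs each frame
// with a shared secret and a timestamp; the server recomputes the HMAC and
// rejects frames that were altered or replayed. The secret provisioning and
// frame format here are assumptions, not a specific vendor protocol.
const SECRET = Buffer.from("per-session-secret-from-key-exchange"); // assumed

function signFrame(frame: Buffer, timestampMs: number): Buffer {
  return createHmac("sha256", SECRET)
    .update(String(timestampMs))
    .update(frame)
    .digest();
}

function verifyFrame(
  frame: Buffer,
  timestampMs: number,
  tag: Buffer,
  maxSkewMs = 2000
): boolean {
  if (Math.abs(Date.now() - timestampMs) > maxSkewMs) return false; // stale or replayed
  const expected = signFrame(frame, timestampMs);
  return expected.length === tag.length && timingSafeEqual(expected, tag);
}

// Example round trip
const frame = Buffer.from("raw-frame-bytes");
const ts = Date.now();
const tag = signFrame(frame, ts);
console.log("frame accepted:", verifyFrame(frame, ts, tag));
```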
Many remote onboarding and KYC platforms are integrating AI-based anomaly detection and active liveness detection, in which the software analyzes users' real-time movements to verify their authenticity.
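A core property of active liveness checks is that a pre-recorded or replayed deepfake cannot anticipate a randomly chosen challenge. The TypeScript sketch below illustrates that challenge-response loop; estimateHeadPose is a hypothetical stand-in for a real pose model (stubbed here so the sketch runs), and the threshold angles are arbitrary illustrative values.

```typescript
// Active liveness sketch: issue a random challenge and verify that the
// user's observed head movement matches it across captured frames.
type Challenge = "TURN_LEFT" | "TURN_RIGHT" | "NOD";

interface HeadPose {
  yawDegrees: number;   // left/right head rotation
  pitchDegrees: number; // up/down head rotation
}

// Hypothetical stand-in: a production system would run a face-landmark
// pose model here. Stubbed with a fixed pose for illustration.
function estimateHeadPose(_frame: Uint8Array): HeadPose {
  return { yawDegrees: -30, pitchDegrees: 0 };
}

function randomChallenge(): Challenge {
  const options: Challenge[] = ["TURN_LEFT", "TURN_RIGHT", "NOD"];
  return options[Math.floor(Math.random() * options.length)];
}

function challengeSatisfied(challenge: Challenge, poses: HeadPose[]): boolean {
  // Thresholds are arbitrary illustrative values, not calibrated figures.
  switch (challenge) {
    case "TURN_LEFT":  return poses.some((p) => p.yawDegrees < -20);
    case "TURN_RIGHT": return poses.some((p) => p.yawDegrees > 20);
    case "NOD":        return poses.some((p) => p.pitchDegrees > 15);
  }
}

// Example: estimate poses from captured frames and check the challenge.
const challenge = randomChallenge();
const poses = [new Uint8Array(0)].map(estimateHeadPose);
console.log(challenge, "satisfied:", challengeSatisfied(challenge, poses));
```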
Additional strategies include digital watermarks to trace the original source of the video and multi-factor authentication to provide an extra layer of security.
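As an illustration of the watermarking idea, the sketch below hides a short session tag in the least significant bits of a frame's blue channel so the feed's origin can later be verified. This naive LSB scheme is an assumption for demonstration only; it does not survive video re-encoding, which is why production systems use robust watermarking techniques instead.

```typescript
// Least-significant-bit watermarking sketch: embed a short session tag in
// the blue channel of RGBA pixel data so the video's origin can be checked.
function embedWatermark(pixels: Uint8ClampedArray, tag: string): void {
  const bits: number[] = [];
  for (const byte of new TextEncoder().encode(tag)) {
    for (let i = 7; i >= 0; i--) bits.push((byte >> i) & 1); // MSB first
  }
  // RGBA layout: the blue channel is every 4th byte, starting at index 2.
  for (let i = 0; i < bits.length; i++) {
    const idx = i * 4 + 2;
    pixels[idx] = (pixels[idx] & 0xfe) | bits[i];
  }
}

function extractWatermark(pixels: Uint8ClampedArray, byteLength: number): string {
  const bytes = new Uint8Array(byteLength);
  for (let i = 0; i < byteLength * 8; i++) {
    bytes[i >> 3] = (bytes[i >> 3] << 1) | (pixels[i * 4 + 2] & 1);
  }
  return new TextDecoder().decode(bytes);
}

// Round trip over a dummy 64x64 RGBA frame
const frame = new Uint8ClampedArray(64 * 64 * 4);
embedWatermark(frame, "kyc-session-42");
console.log(extractWatermark(frame, "kyc-session-42".length)); // "kyc-session-42"
```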
The European Association of Biometrics (EAB) is set to publish the TS 18099 standard in October this year, which aims to address the injection of biometric data between the data capture and signal processing components of biometric systems used for remote identity proofing.