How a new wave of deepfake cybercrime targets businesses

As deepfake attacks against businesses dominate the headlines, detection experts are gleaning valuable insights into how these attacks occur and the vulnerabilities they exploit.

From 2023 to 2024, frequent phishing and social engineering attacks resulted in account hijacking, asset and data theft, identity theft, and reputational damage to companies across various industries.

Call centers at major banks and financial institutions are being overwhelmed by an onslaught of deepfake calls that use voice-cloning technology to infiltrate customer accounts and initiate fraudulent transactions. Internal help desks and staff are similarly inundated with social engineering campaigns via phone calls and messages, and these campaigns often succeed, as in the attack on internal software provider Retool, which led to follow-on attacks against Retool's customers. In another incident, a finance employee was tricked into transferring $10 million to a fraudster. Even speaker-based authentication systems are being circumvented by increasingly refined deepfake audio.

The barrier to entry for bad actors is lower than ever. The tools for creating deepfakes are cheaper and more accessible than ever, letting users without technical know-how mount sophisticated AI-powered fraud campaigns.

Given the growing proliferation of deepfakes and the evolving tactics of cybercriminals, real-time, AI-powered detection is essential to protect a company's financial and reputational interests.

Deepfakes that transcend modalities

Deepfakes are synthetic media (images, video, audio, or text) that appear authentic but have been created or manipulated with a generative AI model.

Deepfake audio refers to synthetically generated sound created or modified using deep learning models. The most common technique is voice cloning: fake speech generated from less than a minute of audio samples of a real person. Voice cloning is of particular concern in industries that use voice biometrics to grant access to customer accounts, and companies that handle high call volumes as part of their business report ongoing deepfake attacks on their infrastructure via cloned voices.

Creating a deepfake video typically involves training a deep neural network on a large dataset of videos or images featuring the individual of interest. The model learns their facial features, expressions, and mannerisms, allowing it to generate new video content that looks authentic. Cybercriminals use deepfake videos for a variety of purposes, including impersonating business leaders, bypassing biometric authentication, and creating false advertising. Deepfake images, meanwhile, can be used to alter documents and circumvent the efforts of know-your-customer (KYC) and anti-money laundering (AML) teams to curb the creation of accounts under false identities.

Deepfake text refers to artificially generated content intended to mimic the style, structure, and tone of human writing. These models are trained on large text datasets to learn patterns and relationships between words, producing sentences that appear coherent and contextually appropriate. Deepfake text aids cybercriminals in large-scale social engineering and phishing attacks by generating high volumes of convincing copy, and it is equally useful for document forgery.

The impact of deepfakes across industries

Audio deepfakes are among the biggest risk factors for modern businesses, especially financial institutions. Bank call centers are increasingly inundated with voice-cloned calls attempting to access customer accounts, while fraudsters submit AI-altered documents to open fake accounts, and this kind of fraud is now the top security concern for most banks. Finance employees have been manipulated into moving tens of millions of dollars by deepfake meetings that replicate the voices and likenesses of their CEOs. After the Retool phishing attack, a single cryptocurrency customer of the company lost $15 million in assets.

However, the damage caused by deepfake cybercrime goes far beyond voice cloning and can affect every industry. Insurance companies face huge losses when fraudsters submit deepfake evidence to support fraudulent claims. Competitors may create fake customer testimonials, or even deepfake videos and images of defective products, to damage a brand. The average cost to create a deepfake is $1.33, yet the global cost of deepfake fraud in 2024 is expected to reach $1 trillion. Deepfakes also threaten markets and the economy as a whole: a deepfake image of an explosion at the Pentagon caused a stock market panic before authorities could refute it, and more sophisticated attacks could easily inflict massive losses in corporate value and damage to the global economy.

For media companies, the reputational damage caused by deepfakes can quickly translate into lost viewers and advertising revenue. At a time when audiences are already skeptical of the content they see, deepfakes raise the stakes for accurate reporting and fact-checking. If audiovisual material that serves as the basis or evidence for a news report turns out to be an unverified, unlabeled deepfake, the damage to the newsroom and its relationship with its audience may be irreparable.

Social media platforms are equally vulnerable, especially now that they have become the primary source of news for a majority of Americans. It costs a malicious actor just 7 cents to deliver a weaponized deepfake to 100,000 social media users. Allowing AI-manipulated news stories to spread unchecked could lead to severe losses of viewers and advertisers and to shareholder anxiety, not to mention corrosive effects on society as a whole.

Deepfake disinformation campaigns can undermine election integrity and sow civil unrest and chaos within government agencies. Such attacks can disrupt markets, weaken the economy, and erode trust between voters and electoral systems; deepfake robocalls impersonating President Biden reached more than 40,000 voters in New Hampshire. Nor are these campaigns limited to elections. State-sponsored actors may create synthetic videos of leaders making false claims in order to damage diplomatic and trade relations, incite conflict, and manipulate stock prices. The World Economic Forum's 2024 Global Risks Report ranks AI-powered disinformation as the biggest threat facing the world over the next two years.

Deepfake detection solution

How can organizations combat this pressing threat? It all comes down to detection.

The ability to detect AI-generated audio, video, images, and text accurately, quickly, and at scale helps organizations stay ahead of attackers who use deepfakes to conduct fraud and disinformation campaigns.

Anyone working to secure call centers, customer-facing teams, or internal help desks will want a solution that can detect AI-generated voices in real time. Because these points of contact are highly vulnerable to fraud, real-time audio deepfake detection should integrate seamlessly into existing voice and biometric platform workflows, without forcing businesses to retrain employees on an entirely new technology stack.
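As a rough illustration of how real-time detection might slot into a call workflow, the sketch below scores incoming audio chunks and escalates a call once the rolling deepfake score stays high. The class name, window size, and threshold are hypothetical stand-ins for demonstration, not any vendor's actual API.

```python
from collections import deque

class CallMonitor:
    """Flags a live call when the rolling deepfake score stays high.

    Illustrative sketch: in production, each score would come from a
    real-time audio deepfake model scoring a chunk of call audio.
    """

    def __init__(self, window=5, threshold=0.8):
        self.scores = deque(maxlen=window)  # rolling window of chunk scores
        self.threshold = threshold
        self.flagged = False

    def ingest(self, score):
        """Record one chunk's probability (0-1) of being synthetic audio."""
        self.scores.append(score)
        avg = sum(self.scores) / len(self.scores)
        # Flag only when the window is full and consistently suspicious,
        # so a single noisy chunk does not interrupt a legitimate call.
        if len(self.scores) == self.scores.maxlen and avg >= self.threshold:
            self.flagged = True  # escalate to a fraud analyst
        return avg
```

Averaging over a window rather than reacting to single chunks is one simple way to trade a slightly slower alert for far fewer false positives on legitimate callers.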

One in six banks struggles to identify customers at some stage of the customer journey, and financial leaders cite customer onboarding as the workflow most vulnerable to fraud. Text and image detection capabilities are a powerful deterrent against forged documents, identity theft, and phishing attempts. A comprehensive deepfake detection toolset should power KYC and anti-fraud teams' onboarding and re-authentication flows and protect against both presentation and injection attacks.

Journalists should feel empowered to report the news with confidence that their sources are authentic. Image, video, and text detection models help reporters avoid building legitimate reports on fabricated evidence. With 53% of Americans getting their news from social media, a well-equipped detection solution can help content moderation teams, who cannot manually validate the onslaught of content at scale, keep social media platforms from becoming unwitting channels for fake content.

Sophisticated audio deepfake detection tools are built to flag the latest popular instrument of political manipulation: misleading robocalls that use voice clones of political candidates. State-sponsored attackers can now easily impersonate heads of state and other political figures, but today's detection solutions can catch synthetic spoofers at critical moments and reliably alert the public. Text detection helps government agencies catch harmful AI-generated documents and communications, preventing identity theft and fraud before they affect the lives and livelihoods of citizens.

Reality Defender is one solution that detects and protects against advanced deepfakes across media types. Its platform-agnostic API lets organizations upload a firehose of content and apply a multi-model approach that examines every uploaded file from multiple angles, with the latest deepfake-generation models in mind, scaling detection capabilities on demand. The result is a more complete and robust outcome score that reflects the likelihood of AI manipulation. By using multiple models across multiple modalities, organizations can take informed, data-driven next steps to protect their clients, assets, and reputations from today's and tomorrow's sophisticated deepfake attacks.
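To make the multi-model idea concrete, here is a minimal sketch of combining several per-model scores into a single outcome score. The model names and the aggregation rule are illustrative assumptions, not Reality Defender's actual algorithm or API.

```python
def outcome_score(model_scores):
    """Combine per-model manipulation probabilities (0-1) into one score.

    Illustrative aggregation only: blend the ensemble average with the
    single strongest detection, so one confident model is not washed
    out by the others.
    """
    if not model_scores:
        raise ValueError("need at least one model score")
    avg = sum(model_scores.values()) / len(model_scores)
    peak = max(model_scores.values())
    return round(0.5 * avg + 0.5 * peak, 3)

# Hypothetical per-model scores for one uploaded audio file.
scores = {
    "voice_clone_detector": 0.91,
    "vocoder_artifact_model": 0.84,
    "spectral_replay_model": 0.22,
}
combined = outcome_score(scores)  # a high value suggests AI manipulation
```

Blending the mean with the peak is one simple design choice: it keeps a lone confident detector influential while still rewarding agreement across models and modalities.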


