How generative AI makes scammers incredibly convincing
Taylor Swift pitching cookware. Shilling for a "Tom Hanks" dental plan. "Grandchildren" seeking emergency funds. The rise of generative AI has been a boon for many people, but unfortunately, that includes scammers and hackers. Now, victims are going public with their losses, celebrities are speaking out about digital impersonation, and government agencies are considering tougher measures to combat the growing security risks posed by AI.
Video and fake conversations
Celebrity video deepfakes have, of course, attracted media attention. Recently, an AI clone of Taylor Swift appeared to give away Le Creuset cookware in ads designed to harvest data from unknown parties by asking customers to complete surveys. Le Creuset issued an apology and a warning before the ads were removed. Tom Hanks had to speak out on the topic last October, when a fake dental plan used AI to create a video of his likeness.
The problem goes beyond consumers and celebrities. In February, a Hong Kong bank lost $25 million when scammers faked a video call with employees, including the chief financial officer. AI is adding a whole new dimension to phishing: many people are wary of suspicious phrases or email addresses, but few think to question the evidence they see with their own eyes.
Audio, text, and images
AI fraud doesn't need to involve video. Voice cloning can convince people that a request for money or other assistance is coming from someone they know, or that the caller really is from a government agency with the authority to request personal information. AI-generated voices may also be able to bypass voice-based security measures, giving hackers access to accounts where they can harvest data, reroute direct deposits, and profit at defrauded users' expense.
Scammers on dating sites are starting to use AI to improve their catfishing schemes, creating convincing images, text messages, and even video to lure people into "relationships" with the goal of extracting money or personal information. Others are leveraging the technology to create far more fake job postings than ever before, collecting the data people submit when they apply or charging fees for bogus "recruitment" services.
Classic phishing emails are also getting an AI upgrade, since generative AI can produce text free of the spelling and grammatical errors that once gave such attempts away.
Scope and solutions
While the number of AI-based social media scams tracked by the FTC remains relatively small, it increased sevenfold between February 2023 and 2024, and experts say many scams never rise to the level of federal charges. In one case, a victim lost $7,000 to a fake Elon Musk ad. Many may find it ironic that a fake Elon Musk could deceive people, but it illustrates the scope of the problem.
The FTC is starting to embrace AI in its own anti-fraud efforts, with a new contest aimed at helping people recognize voice cloning, the authority to sue entities impersonating government agencies, and a proposed rule that would make companies liable if their AI tools are used in deepfake fraud. But even if the proposal takes effect, it probably won't eliminate AI-based fraud. Businesses and consumers alike need to remain vigilant and double-check anything that looks suspicious, even if it ostensibly comes from a grandchild or from Taylor Swift.
