AI-generated content used to be easy to spot, but AI fraud and deepfake videos are becoming ever more convincing.
In just a few years, artificial intelligence (AI) has exploded, going from futuristic fantasy to the backbone of the internet. Social media feeds are now filled with AI-generated content, from weirdly satisfying animations to viral “chubby” AI cats. But behind the novelty lies a dark side to this technology.
This week, Grok, the AI chatbot built into the social media platform X, came under fire following reports that it created sexualized images of women and children at users’ request. Following national and political outcry, Ofcom has launched a formal investigation into X over the creation of sexual deepfake images using its AI tools.
Britain’s media regulator has warned that Grok may be allowing “abuse of intimate images” and even “child sexual abuse material”. Industry experts say the controversy means the tide could be turning against AI, with skepticism growing and politicians pushing for tighter regulation.
It is already clear that not all uses of AI are benign, and the technology’s capabilities continue to evolve. Beyond sexual deepfakes, AI fraud is escalating rapidly: fraudsters are using hyper-realistic fake audio and video to impersonate loved ones, clone voices and scam victims out of thousands of pounds.
As the technology becomes more sophisticated and accessible, how can you tell what is real and what is AI-generated? And what red flags should you look out for when it comes to AI fraud?
Early deepfakes gave themselves away with too many fingers, distorted backgrounds and faces that moved unnaturally. But today, those obvious signs have almost disappeared. Dr Jonathan Aitken, a senior lecturer in robotics at the University of Sheffield, explains that deepfakes are built on a deceptively simple framework.
“The technology is built around a system block that looks at what comes in and says, ‘Do you think this is real?’” he tells the Manchester Evening News. This block, known as a discriminator, is trained on real images and videos.
The second block is the generator. “You have another block, and you just keep feeding things into it until you find something that fools the first block,” explains Dr Aitken. “If you can fool the discriminator, you can quickly create a deepfake.”
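That two-block arrangement is known as a generative adversarial network (GAN). Here is a minimal sketch of the loop Dr Aitken describes, written in PyTorch; the layer sizes, learning rates and the placeholder “real” data are illustrative assumptions, not details of any specific deepfake system.

```python
# Minimal GAN sketch: a discriminator learns to answer "is this real?",
# while a generator learns to produce samples that fool it.
import torch
import torch.nn as nn

# Discriminator: trained on real images, outputs the probability the input is real
discriminator = nn.Sequential(
    nn.Linear(784, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

# Generator: turns random noise into a candidate image
generator = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 784), nn.Tanh(),
)

loss_fn = nn.BCELoss()
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)

for step in range(10_000):
    real = torch.rand(32, 784) * 2 - 1   # placeholder: a batch of real images
    noise = torch.randn(32, 64)
    fake = generator(noise)

    # 1) Train the discriminator to tell real samples from generated ones
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the discriminator ("fool the first block")
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The key point is that the generator only ever has to fool the current discriminator; after the two blocks have trained against each other for long enough, the output can look convincing to human eyes as well.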
Not all AI-generated content is deceptive, but the line between playful and harmful is thin. “What we are reaching is the edge of a more dangerous realm: AI-generated content, deepfake content, that is sexual, political or harmful,” he adds.
These videos often reuse real footage, keeping human voices and mannerisms but adding new audio and movements sourced from elsewhere. “AI is not genius,” Dr Aitken says. “Something doesn’t suddenly come out of thin air. That video has a context. It’s sourced from another video.”
This is especially concerning when deepfakes show politicians making controversial statements. Dr Aitken warns that the risk is “what becomes true and what becomes false”.
Context is key
Dr Aitken says that as deepfakes improve, the visual errors we once relied on are a thing of the past. “In the early days of AI-generated images, the number of fingers would be wrong; the image looked right, but it wasn’t anatomically correct,” he explains. But the models have improved. Now you may not be able to pinpoint exactly what the problem is; it’s just a feeling that something isn’t right.
So context is key. “Determining whether an image comes from one of these tools requires judgment in context,” explains Dr Aitken. If an AI-generated video or photo looks right, it is because the discriminator judged it real, so you are likely to judge it real too, he says.
That’s where the human discriminator comes into play. Dr Aitken says internet users must ask: “Is this real? Do the facts match what I can confirm through independent sources?”
He compares our new digital reality to the early days of GPS, when people blindly followed their navigation systems, even when guided the wrong way down a one-way street. “We have tools that show us something, but it can be complete nonsense. We have to do the identifying ourselves,” he explains.
How fraudsters are exploiting AI
Cybersecurity and digital forensics expert and IET Fellow Dr Junaid Ali says fraudsters are already exploiting AI, and doing it successfully. He explains that scammers are using “face swapping” to impersonate other people on video calls, replicating a loved one’s voice for fake emergency calls, and generating fraudulent emails faster than ever.
The good news is that there are ways to spot AI fraud and protect yourself. Dr Richard Whittle, an expert on AI and dark patterns at the University of Salford, has shared his advice online. First, he says, if it sounds too good to be true, it is likely a scam.
“Scammers are bombarding you with get-rich-quick offers, capitalizing on people’s fears of missing out on the AI boom in investing,” he wrote. “As always, if it seems too good to be true, it’s most likely a scam, especially if you are asked to make a deposit immediately to secure your ‘opportunity’ or if you are asked to pay by an unusual method, such as cryptocurrencies.”
To reduce your risk of falling victim to voice cloning scams, Dr. Whittle recommends establishing safe words and phrases with people close to you.
“These calls typically ask for immediate help and for money to be sent. Scammers prey on vulnerability and anxiety, sometimes calling in the middle of the night with the voice of a loved one explaining that they have been in an accident, that they are borrowing a phone, or that they urgently need cash transferred to pay for medical treatment,” he explains. “These scams are becoming increasingly common, and you may want to consider establishing a safe word or phrase with your family to verify a caller’s identity.”
If you come across a video online of a celebrity apparently endorsing an investment scheme, there is a good chance it is a deepfake. Despite advances in the technology, these videos can still contain telltale signs, he says.
“Always be on the lookout for lip-sync mistakes, unusual facial expressions and odd backgrounds. The subject may be unusually still, looking stiff and monotonous. They may blink too often or too rarely, and if they wear glasses, the glasses may not look right; perhaps there is too much (or too little) glare,” he explains. “AI also often struggles with facial hair, so beards and moustaches may not look completely natural.”
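As a crude illustration of the blink-rate red flag, the sketch below counts open-to-closed eye transitions in a clip using OpenCV’s bundled Haar cascades. The file name and the “normal” blink range are assumptions for illustration; this is a toy heuristic, not how professional deepfake detectors work.

```python
# A crude blink-rate check using OpenCV's bundled Haar cascades.
# The "normal" range quoted below is an illustrative assumption only.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def blink_rate(video_path: str) -> float:
    """Estimate blinks per minute by watching for frames where a face
    is visible but no open eyes are detected."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    face_frames, blinks, eyes_were_open = 0, 0, True
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.1, 5)
        if len(faces) == 0:
            continue
        x, y, w, h = faces[0]
        face_frames += 1
        roi = gray[y:y + h // 2, x:x + w]   # eyes sit in the upper half of the face
        eyes_open = len(eye_cascade.detectMultiScale(roi, 1.1, 5)) > 0
        if eyes_were_open and not eyes_open:
            blinks += 1                      # open -> closed transition = one blink
        eyes_were_open = eyes_open
    cap.release()
    minutes = face_frames / (fps * 60.0)
    return blinks / minutes if minutes > 0 else 0.0

# People typically blink roughly 15-20 times a minute; a rate far outside
# that band is one weak signal that footage deserves a closer look.
print(blink_rate("suspect_clip.mp4"))  # hypothetical file name
```

A heuristic like this is easily fooled in both directions, which is exactly the experts’ point: no single tell is decisive, and the checks above are prompts for human judgment rather than a detector.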
However, Dr Aitken emphasizes that the real solution to the deepfake scourge is not better detection tools, but better design. He explains: “When we develop hardware or technology, one of the questions we ask as responsible researchers is: ‘Is it possible to use, abuse or exploit this? And if so, what do we do about it as part of the design and development process?’”
But with AI tools like Grok, he says, those questions may not have been asked. And once a deepfake exists, the damage has already been done. So as the technology evolves, our instincts need to evolve with it, and humans need to do the identifying themselves.

