Activists issued a stark warning last year after artificial intelligence was used to create thousands of child sex abuse videos, contributing to record levels of such harrowing content being found online.
The Internet Watch Foundation (IWF) revealed that analysts discovered 3,440 AI-generated videos depicting child sexual abuse in 2025, a dramatic increase from just 13 such videos identified in 2024.
Overall, IWF staff processed 312,030 confirmed reports of abusive images found on the internet in 2025, up from 291,730 the previous year.
Their research found that of the 3,440 videos generated by AI, 2,230 were classified as Category A, the most extreme classification under UK law, with a further 1,020 falling into Category B, the second most severe.
IWF chief executive Kelly Smith said: “When images and videos of children being sexually abused are distributed online, everyone’s safety is compromised, especially children.
“Our analysts are working tirelessly to remove this imagery to give hope to victims. But AI is now so advanced that criminals can essentially own their own child sexual abuse machines and create whatever they want to see.”
“The horrifying rise in AI-generated extreme Category A videos of child sexual abuse shows what criminals are willing to do. And it’s dangerous.
“The easy availability of this material will only embolden those with a sexual interest in children, furthering its commercialization and further endangering children both online and offline.
“Governments around the world must now ensure that AI companies incorporate safe design principles from the start. It is unacceptable that technology that allows criminals to create this content is openly available to the public.”
The report comes after X announced limits on its AI chatbot Grok’s ability to manipulate images, following an outcry over reports that users could instruct the chatbot to sexualize images of women and children.
The company announced earlier this week that it would ban Grok from “editing images of people in revealing clothing” and block users from creating similar images of real people in countries where it is illegal.
Technology Secretary Liz Kendall said she still expected regulator Ofcom to establish the facts “fully and definitively” and said while the watchdog welcomed the new restrictions, it would continue its investigation seeking “answers about what went wrong and what is being done to fix it”.
The IWF has previously said it wants all “nudification” software to be banned, and has argued that AI companies need to make their tools more secure before they are made available, and that governments should make this mandatory.
Children’s charity NSPCC said the IWF’s findings were “deeply worrying but also sadly predictable”.
Its chief executive, Chris Sherwood, said: “Criminals are using these tools to create extreme content on a scale never seen before, and children are paying the price.”
“Technology companies cannot continue to release AI products without building in important protections. They know the risks, and they know the harm that can be caused. It is their responsibility to ensure that their products are never used to create indecent images of children.”
“The UK Government and Ofcom must step in now and ensure technology companies are held to account.
“We are calling on Ofcom to use all the tools available through the Online Safety Act, and for the Government to introduce a statutory duty of care on generative AI services so that child safety is built into the design of their products. This is essential to preventing these horrific crimes.”
“It is absolutely abhorrent that AI is being used to target women and girls,” Kendall said, adding that the government “will not tolerate this technology being weaponized to cause harm, which is why I have accelerated action to bring into force a ban on the creation of non-consensual AI-generated intimate images.”
She added: “AI should be a force for progress, not abuse, and we are determined to support the responsible use of AI to drive growth, improve lives and deliver real benefits, while also taking action when it is being misused.”
“That is also why we have introduced a world-first offence targeting AI models that have been trained or adapted to produce child sexual abuse material. Possession, supply or modification of these models will soon be a criminal offence.”
The Lucy Faithfull Foundation, which works to help offenders stop viewing images of child abuse, also said the number of people using AI to view or create images of abuse had doubled in the last year.
Young people who are concerned about their indecent images being shared online can use the free report removal tool at childline.org.uk/remove.
Safeguarding Minister Jess Phillips said: “The proliferation of AI-generated child abuse videos is frightening. The Government will not sit back and let predators produce this disgusting content.”
She added: “Tech companies have no more excuses. If they don’t act now, we will force them to do so.”
