The amount of AI-generated child sexual abuse content found online surges in 2025


The amount of AI-generated child sexual abuse material found online has increased by 14% in the past year, with the majority of videos depicting the most extreme category of abuse, according to a safety watchdog.

The Internet Watch Foundation (IWF) said it had identified 8,029 realistic AI-generated images and videos of child sexual abuse material (CSAM) in 2025. It added that the number of videos had increased more than 260 times.

The IWF said 65% of the 3,443 videos were classified as Category A, the most severe material under UK law. The corresponding figure for non-AI videos is 43%, indicating the technology is being used to create more extreme content, the watchdog said.

Kelly Smith, chief executive of the IWF, said: “Advances in technology must never come at the expense of children’s safety and wellbeing. AI has a lot to offer for good, but it’s frightening to think that its power could be used to destroy a child’s life. This material is dangerous.”

One IWF analyst said conversations between paedophiles on the dark web showed the technological advances had been “well received” by CSAM users. Their discussions centred on the increasingly realistic output of AI systems as they improve, including the ability to add audio to videos and to convincingly manipulate images of real children known to the offenders.

The UK-based IWF operates a hotline and has a global remit to monitor child sexual abuse content online. Its report said criminals are also discussing the possibility of using “agent” systems, which can perform tasks autonomously.

In the UK, technology companies and child protection agencies have been given powers to test whether AI tools can generate CSAM, with ministers last year saying the aim was to stop abuse before it happens.

Under the changes, the government will give designated AI companies and child safety organisations permission to examine generative artificial intelligence models, the underlying technology behind chatbots such as ChatGPT and image generators such as Google’s Veo 3, and to ensure safeguards are in place to prevent the creation of such material.

“For the children, the victims, the survivors, we cannot afford to be complacent,” Smith said. “New technology must be held to the highest standards. In some cases, lives can be at risk.”

As AI systems become more capable and more widely available, the amount of AI-generated CSAM verified by the IWF is rising rapidly, with video increasing especially fast.

The IWF also published a poll showing that eight in 10 UK adults want the government to introduce legislation to ensure AI systems are developed with safety as a top priority and “do no harm in the future”. Last year, the government announced a ban on the possession, creation and distribution of AI models designed to produce child sexual abuse material.
