Tipping point in online child abuse

New data suggest there may be more child pornography online in 2025 than at any point in history. Last year, the Internet Watch Foundation, a UK-based organization that works to identify and remove child pornography from the web worldwide, investigated a record 312,030 reports of confirmed material.

That figure is alarming in itself: it means the total amount of child pornography detected on the internet rose 7 percent over the previous record, set in 2024. Just as troubling is the sharp increase in AI-generated child pornography, especially video. At first glance, the proliferation of AI-generated depictions of child sexual abuse might give the false impression that no real children are harmed. That is not true. AI-generated abusive images and videos feature and victimize real children, either because the models were trained on existing child pornography or because the AI was used to manipulate real photos and videos.

Today, the IWF reported that it discovered 3,440 AI-generated videos of child sexual abuse in 2025. For years, social media, encrypted messaging, and dark-web forums have fueled a steady rise in child-sexual-abuse content; now generative AI is dramatically exacerbating the problem. Another terrible record is very likely to be set in 2026.

Of the thousands of AI-generated videos of child sexual abuse that the IWF discovered in 2025, almost two-thirds were classified as “Category A,” the most severe category, which includes penetration, sexual torture, and bestiality. Another 30 percent were Category B, depicting non-penetrative sexual acts. With this relatively new technology, “criminals essentially own their own child sexual abuse machine and can create whatever they want to see,” the IWF chief executive Kelly Smith said in a statement.

The volume of AI-generated images of child sexual abuse has been growing since at least 2023. In just one month in early 2024, for example, the IWF found that users had uploaded more than 3,000 AI-generated images of child sexual abuse to a single dark-web forum. In early 2025, the digital-safety nonprofit Thorn reported that of a sample of more than 700 American teenagers it surveyed, 12 percent knew someone who had been the victim of a “deepfake nude.” The uptake of AI-generated video depicting child sexual abuse has lagged behind that of still images, because video-generation tools were, until recently, far less photorealistic than image generators. “When AI videos weren’t realistic or sophisticated, criminals didn’t bother producing them in large numbers,” the IWF spokesperson Josh Thomas said. That has changed.

Last year, OpenAI released its Sora 2 model, Google released Veo 3, and xAI released Grok Imagine. Meanwhile, other organizations have built many advanced open-source AI video-generation models. These open-source tools are typically free for anyone to use and carry far fewer safeguards, if any. There are almost certainly AI-generated videos and images of child sexual abuse that authorities will never be able to detect, because they are created and stored on personal computers. Abusers can operate in secrecy: they no longer have to find and download such material online and potentially expose themselves to law enforcement.

OpenAI, Google, Anthropic, and several other top AI labs have joined efforts to prevent AI-enabled child sexual abuse, and all the major labs say they are taking steps to prevent their tools from being used for such purposes. Still, safeguards can be circumvented. In the first half of 2025, OpenAI reported more than 75,000 instances of child sexual abuse and child endangerment on its platform to the National Center for Missing and Exploited Children, more than double the number it reported in the second half of 2024. An OpenAI spokesperson told me that the company designs its products to prohibit the creation or distribution of “content that exploits or harms children” and to take “action when violations occur.” The company reports all cases of child sexual abuse to NCMEC and bans the associated accounts. (OpenAI has a corporate partnership with The Atlantic.)

In other words, the sophistication and ease of use of AI video generators provide a gateway to exploitation. This dynamic has been on display in recent weeks as Elon Musk’s AI model, Grok, has been used to publicly generate potentially hundreds of thousands of nonconsensual sexual images, primarily of women and children, on his social platform, X. (Musk claimed he had “no knowledge of the naked images of minors generated by Grok” and blamed users for making illegal requests, while his employees quietly rolled back parts of the tool.) While investigating the dark web, the IWF found cases in which Grok appeared to have been used to create abusive depictions of children between the ages of 11 and 13, which were then fed into more permissive tools to produce darker, more explicit content. “The easy availability of this material only emboldens those with a sexual interest in children” and “furthers its commercialization,” Smith said in an IWF press release. (Yesterday, the X safety team said it was restricting Grok’s ability to generate images of people in skimpy clothing and was working with law enforcement “as appropriate.”)

There are signs that the AI-driven child-sexual-abuse crisis is worsening. A growing number of countries, including the UK and the US, have passed laws making it illegal to create and publish such material, but in practice, prosecutions of perpetrators have been slow. Meanwhile, Silicon Valley continues to move at a breakneck pace.

Every new digital technology has been used to harass and exploit people. The age of AI sexual abuse may have been predictable, but it has arrived nonetheless. AI executives, engineers, and experts like to say that today’s AI models are the worst they will ever be, meaning the technology will only grow more capable from here. By the same logic, AI’s capacity to enable child abuse is likely to worsen as well.
