(InvestigateTV) — Artificial intelligence is unlocking new creative possibilities, but it can also open the door to risks, including for children.
The National Center for Missing and Exploited Children received more than 440,000 reports involving generative artificial intelligence (AI) in the first half of 2025, up from 6,835 reports during the same period in 2024.
Experts say the sharp increase reflects a growing trend in crimes using AI to create explicit content involving children, and that the technology poses new risks to children’s well-being.
The statistics are more than numbers, experts say: they represent children at risk.
Mississippi teacher case highlights misuse of AI
The effects of this kind of exploitation have reached Corinth, Mississippi.
A teacher in the community of about 15,000 people east of Memphis is accused of using AI to create explicit videos of female students between the ages of 14 and 16, police said.
The girls were never actually filmed. The videos were deepfakes created using AI.
The teacher told investigators that he took photos from the girls’ social media accounts and created the videos using an easily accessible AI generator, according to a federal arrest affidavit.
“It’s really hard because these are the guys I grew up with and the kids are friends,” said Corinth Police Chief Landon Tucker.
The new interim superintendent said the district has updated its policies to address technology and artificial intelligence.
New technology creates new challenges
“This is a very new problem. It’s a new problem for parents. It’s a new problem for teenagers. It’s a new problem for legislators,” said Chris McKenzie, president of Responsible Innovation USA.
The nonprofit successfully lobbied Congress to pass the Take It Down Act, which criminalizes the digital forgery of child sexual abuse material and requires social media platforms to remove content within 48 hours of notification.
“That means our response time has been significantly reduced. Previously, many targeted teens had nowhere to turn for help,” McKenzie said.
Law enforcement adapts to evolving threats
New advances in AI technology are making it harder for law enforcement and parents to keep up, said Taneka Blackwell, special agent in charge of the FBI’s field office in Memphis.
“Parents should log out of their social media accounts and log into their child’s accounts from time to time, as parents can be the first line of defense for people who seek to exploit or groom their children,” she said.
In July 2025, Tennessee made it a felony to possess or distribute synthetic or digitally created images of minors in the context of pornography. Arkansas and Mississippi have similar laws.
The law creates an area for prosecution even when no actual contact occurs between predator and victim.
“These images are there forever, so it’s not victimless,” Blackwell said. “You know, it’s about kids growing up, going to college, applying for scholarships, applying for jobs. Someone could search Google for images of these kids and find that content, and they had no idea. They didn’t consent to it being created in their likeness.”
Experts said bad actors will also use AI to make cyberattacks more evasive and difficult to counter.
Read the full article by Kelli Cook here.
How to report a concern
You can report your concerns to the National Center for Missing and Exploited Children’s tip line.
mAnIpulated: InvestigateTV series on the impact of AI
Learn more about AI and how the mAnIpulated series can help you tell what’s real from what’s fake.
Will you be able to spot the AI? Test your skills.
Can you spot AI-generated images? Test yourself with this interactive game and others on the mAnIpulated series home page.
Watch InvestigateTV for more.
Copyright 2026 Gray Media Group, Inc. All rights reserved.
