A new report from the US PIRG Education Fund suggests that major AI companies are doing little to police how their AI models are used by the developers who pay them for access. One result, the group warns, could be AI toy makers shipping children's products powered by AI models intended only for adults.
PIRG’s previous research shows how badly things can go when a child’s toy is paired with a loose-lipped chatbot. FoloToy’s AI teddy bear sparked a storm of controversy last November when it was found to be having highly inappropriate conversations with children, including detailed instructions on how to start a fire, advice on where to find medication, and in-depth discussions of sexual fetishes, including teacher-student role-play.
This should have been a wake-up call for AI companies to be more vigilant about how developers use their technology, especially when children are involved. Indeed, OpenAI, whose model powered the teddy bear, said at the time that it had blocked FoloToy’s access to its models.
However, when PIRG tested the sign-up processes for OpenAI, Google, Meta, and xAI, the providers “asked no substantive scrutiny questions” and requested only basic information such as an email address and credit card number. Only Anthropic asked testers how they planned to use the model or whether the products they planned to build were intended for minors. Once PIRG gained developer access, it reportedly built an AI-powered chatbot simulating a teddy bear on three of the platforms, each in less than 15 minutes.
“I was quite surprised at how little information they had collected upfront,” RJ Cross, report co-author and PIRG’s Our Online Life Program director, said in an interview with Futurism. “If I were an AI company, I would at least want to have a list of everyone who said they wanted to build a product for kids.”
OpenAI, Meta, and xAI all prohibit the use of their AI chatbots by users under the age of 13, while Anthropic has set the minimum age to 18, PIRG noted. However, these restrictions apparently do not apply when third-party developers use the technology. OpenAI still allows several children’s toy companies to use its AI, and previously explained that it is the responsibility of these companies, not itself, to “keep minors safe” and ensure they are not exposed to “age-inappropriate content, including graphic self-harm, sexual and violent content.”
Even that punishment, it appears, has not been strictly enforced. FoloToy, the banned teddy bear maker, still claims to offer access to OpenAI’s GPT-5.1 model. Yet when PIRG contacted OpenAI, the company maintained that FoloToy’s access remained revoked.
PIRG’s report notes that FoloToy may be lying about its use of GPT-5.1. But given what PIRG’s testing revealed about the sign-up process, it seems entirely possible that FoloToy simply circumvented OpenAI’s ban by creating a new account under a different name. Or perhaps FoloToy is using one of the publicly available “open weight” models. There is no way to know, because OpenAI has declined to provide any meaningful explanation.
OpenAI is just one of the culprits. Google has said it prohibits developers from using its AI in products aimed at minors, but PIRG found at least five AI toys online that claim to use the company’s Gemini model.
“I feel like there’s clearly a public interest in allowing people to know what kind of AI model they’re interacting with,” Cross said.
In response to this report, a spokesperson for the ChatGPT maker issued a statement to PIRG.
“Minors are entitled to strong protections, and we have strict policies that all developers are required to abide by,” an OpenAI spokesperson told the group. “We prohibit developers from using our services to exploit, endanger, or sexualize anyone under the age of 18, and we take action if we determine a developer has violated our policies. These rules apply to all developers who use our APIs, and we run classifiers to ensure that our services are not used to harm minors.”
Cross said that while OpenAI and others may claim to protect minors, such claims do not address the fundamental contradiction in their approach.
“It makes no sense for an AI company that hasn’t released a child-safe version of its AI chatbot to allow anyone with a credit card to sign up and use the same technology to make a product for kids,” she said. “Ultimately, this means AI companies are washing their hands of the problem and entrusting the safety of children to unvetted third parties.”
