Elon Musk's AI chatbot Grok has flooded X with sexualized images, mostly of women, many of them real people. Users prompted the chatbot to “digitally undress” those people, sometimes placing them in suggestive poses.
In several incidents last week, the chatbot generated images that appeared to depict minors, content that many users called child pornography.
The AI-generated images highlight the dangers of AI and social media, especially in combination, when there are insufficient guardrails to protect the most vulnerable in society. Such images violate national and international laws and can put many people, including children, at risk.
“We are taking action against illegal content on X, including child sexual abuse material (CSAM), which includes removal, permanent account suspension, and cooperation with local authorities and law enforcement as appropriate,” Musk and xAI said in a statement. Even so, Grok's responses to user requests remain filled with sexualized images of women.
Musk has long argued publicly against “woke” AI models and what he calls censorship. Inside xAI, Musk has pushed back against Grok's guardrails, a person familiar with the situation told CNN. Meanwhile, xAI's safety team, already small compared with those of its competitors, lost several staff members in the weeks leading up to the surge in “digital undressing” requests.
Grok has long been an outlier among mainstream AI models, allowing, and in some cases encouraging, sexually explicit content and companion avatars.
Unlike competitors such as Google's Gemini and OpenAI's ChatGPT, Grok is also built directly into X, one of the most popular social media platforms. Users can talk to Grok privately, but they can also tag Grok in posts with requests, and Grok will respond publicly.

The recent surge in non-consensual “digital undressing” began in late December, when many users discovered they could tag Grok and ask it to edit images in X posts or threads.
Initially, many posts asked Grok to put people in bikinis. Musk reposted images of himself and his longtime nemesis Bill Gates, among others, in bikinis.
Researchers at Copyleaks, an AI detection and content governance platform, found that the trend may have started when adult content creators encouraged Grok to generate sexual images of themselves as a form of marketing. But almost immediately, “users began issuing similar prompts about women who never appeared to consent,” the Copyleaks investigation found.
Researchers at AI Forensics, a European nonprofit that studies algorithms, analyzed a random sample of more than 20,000 images generated by Grok and 50,000 user requests posted between December 25th and January 1st.
The researchers found that “terms such as 'she', 'put'/'remove', 'bikini', and 'clothing' were used frequently,” and that more than half (53%) of the generated images of people “included a person in minimal clothing, such as underwear or a bikini”; 81% of those depicted people presenting as women. Notably, the researchers found that 2% of the images featured people who appeared to be under the age of 18.
AI Forensics also found that in some cases users asked Grok to place apparent minors in sexual positions or to depict sexual fluids on their bodies. According to AI Forensics, Grok complied with these requests.
Although X allows pornographic content, xAI's own terms of service prohibit “depicting the likeness of any person in a pornographic manner” and the “sexualization or exploitation of children.” X has suspended some accounts and deleted images over these types of requests.
On January 1st, an xAI staff member responded to a user: “Hey, thanks for reporting. The team is looking into further strengthening the guardrails (sic).”
In replies to users, Grok itself acknowledged producing images of minors in sexually suggestive situations.
“Thank you for raising this issue. As previously stated, we have identified deficiencies in our security measures and are urgently fixing them. CSAM is illegal and prohibited,” Grok posted on January 2, instructing users to file formal reports with the FBI and the National Center for Missing and Exploited Children.
By January 3, Musk himself commented in another post, “Those who use Grok to create illegal content will suffer the same consequences as if they uploaded illegal content.”
X's safety account posted a similar statement, saying the company takes action against illegal content, including CSAM, through removal, permanent account suspension, and cooperation with local authorities and law enforcement as appropriate.
Musk has long criticized heavy-handed censorship and has promoted a more explicit version of Grok. In August, while promoting Grok's “spicy mode,” he posted that adult content had contributed to the success of past new technologies such as VHS.
A source familiar with the situation at xAI said Musk has been unhappy with what he sees as Grok's over-censorship for “a long time.” Another source familiar with the situation at X said staffers had consistently raised concerns, internally and to Musk, about inappropriate content Grok produced.
In one meeting in recent weeks, before the latest controversy erupted, Musk told xAI staff from various teams that he was “very frustrated” with restrictions on Grok's Imagine image and video generator, a source with knowledge of the situation said.
Around the time of the meeting with Musk, three xAI staff members who worked on the company's already small safety team publicly announced on X that they were leaving the company: Vincent Stark, director of product safety; Norman Mu, who led the post-training safety and inference team; and Alex Chen, who worked on character and model behavior in post-training.
The sources also questioned whether xAI still uses external tools such as Thorn and Hive to screen for possible child sexual abuse material. Relying on Grok itself for these checks instead could be risky, they said. (A Thorn spokesperson said it is no longer working directly with X. Hive did not respond to a request for comment.)
Sources involved with X and xAI say X's safety team has little oversight of what Grok posts publicly.
In November, The Information reported that X laid off half of an engineering team that worked in part on trust and safety issues. The Information also reported that X staff were particularly concerned that Grok's image generation tools “could lead to the spread of illegal or harmful images.”
xAI did not respond to requests for comment beyond an automated email, sent in reply to all media inquiries, that reads “legacy media lies.”
Guardrails and legal implications
Grok is not the only AI model to have issues with non-consensual sexualized imagery or AI-generated images of apparent minors.
Researchers found AI-generated videos on TikTok and on OpenAI's Sora app that showed people who appeared to be minors in sexualized clothing and poses. TikTok says it has a zero-tolerance policy against content that “demonstrates, promotes, or engages in the sexual abuse or exploitation of young people.” OpenAI says it “strictly prohibits the use of our models to create or distribute content that exploits or harms children.”
Steven Adler, a former AI safety researcher at OpenAI, said guardrails could be built to prevent Grok from generating such images.
“It's absolutely possible to build guardrails that scan images to determine if there are children in them and make the AI act more carefully,” Adler said. “But guardrails come at a cost.”
According to Adler, those costs include slower response times, increased computational overhead, and, in some cases, the model rejecting legitimate requests.
Authorities in Europe, India, and Malaysia have launched investigations into the images produced by Grok.
Britain's media regulator Ofcom said it had made “urgent contact” with Musk's company over “very serious concerns” about Grok functionality that “generates images of naked people and sexual images of children.”
European Commission spokesperson Thomas Regnier said at a press conference on Monday that the Commission is “very seriously” investigating reports that X and Grok's “spicy mode” showed explicit sexual content, with some output reportedly involving childlike imagery.
“This is illegal. This is appalling. This is disgusting. This is our view and this has no place in Europe,” he said.
The Malaysian Communications and Multimedia Commission (MCMC) said it was investigating the matter.
And last week, India's Ministry of Electronics and Information Technology ordered X to “immediately conduct a comprehensive, technical, procedural, and governance-level review of…Grok.”
Riana Pfefferkorn, a lawyer and policy researcher at the Stanford Institute for Human-Centered Artificial Intelligence, said AI platforms that generate problematic images of children could be at legal risk in the United States. The law known as Section 230 has long shielded technology companies from liability for third-party content hosted on their platforms, such as posts by social media users, but it has never protected them from prosecution for federal crimes, including those involving CSAM.
It is also possible that the people depicted in the images could file civil lawsuits.
“The recent Grok episode makes xAI look more like a deepfake nude site than a peer of its competitors OpenAI and Meta,” Pfefferkorn said.
The Take It Down Act, signed by President Donald Trump last year, makes it illegal to share non-consensual explicit images (real or computer-generated) online and requires tech platforms to remove such images within 48 hours of notification.
Asked about the images on Grok, a Justice Department spokesperson told CNN that the department “takes AI-generated child sexual abuse material very seriously and will aggressively prosecute the creators and owners of CSAM.”
CNN's Lianne Kolirin contributed to this report.
