Following global backlash, Grok will no longer be allowed to generate AI images depicting real people in sexualized or revealing clothing.
“We have taken technical steps to prevent Grok accounts from allowing the editing of images of real people in revealing clothing, such as bikinis,” the X Safety account said in a post on the platform Wednesday. “This restriction applies to all users, including paid subscribers.”
The changes were announced hours after California Attorney General Rob Bonta said he had opened an investigation into sexually explicit AI deepfakes produced by Grok, including some depicting children. Bonta said that in recent weeks there had been a flurry of reports of Grok users taking photos of women and minors they found online and using AI models to digitally undress them.
Indonesia and Malaysia suspended Grok over the images, becoming the first countries in the world to ban the AI tool. British MPs publicly considered a similar suspension.
The social media company reiterated in its Wednesday post that, as an additional safety measure, image creation and editing through Grok on the X platform will be available only to paid users.
The company had restricted image features for non-paying users last week after complaints from government officials around the world, but critics said the move did not go far enough.
A spokesperson for British Prime Minister Keir Starmer said this would “simply turn AI capabilities that enable the creation of illegal images into a premium service”.
Elon Musk, owner of xAI, the company that developed Grok, said the UK government wants “any excuse for censorship” in response to a post questioning why AI tools such as Gemini and ChatGPT were not also being investigated.
On Wednesday, hours before X’s official account announced the ban on creating sexual images, Musk urged users to bypass the AI model’s image restrictions.
Bonta’s office and Starmer’s office did not immediately respond to requests for comment from Business Insider.
