Elon Musk’s xAI sued by teens over sexually explicit AI-generated deepfakes

Three Tennessee teenagers sued Elon Musk’s xAI this week, claiming the company’s image generation tools were used to transform real photos into overtly sexual images.

The high school students, who are suing under pseudonyms, filed the lawsuit in California, where Musk’s artificial intelligence company, xAI, is headquartered. They are seeking class action status to represent thousands of other victims who, like them, are minors or were minors at the time the sexually explicit images were created, according to the lawsuit.

According to the complaint, Jane Doe #1 received an anonymous tip in December that someone was distributing sexually explicit images of her on a social media website.

“At least five of these files (one video and four images) depicted her actual face and body in settings she was familiar with, but morphed into sexually explicit poses,” the complaint states. The person who distributed the images knows Doe and claimed to have used xAI’s image generation tools to manipulate real photos of her into sexually explicit ones. One of the images was created from a homecoming photo; another came from her high school yearbook.

The person who distributed the images also created explicit images of at least 18 other girls, two of whom are co-plaintiffs in the lawsuit. In late December, police arrested a man believed to be the culprit and seized his cell phone. They discovered that he had uploaded the images to several platforms, where he traded them for sexually explicit images of other minors.

Other AI companies prohibit their image generation tools from creating sexually explicit content, even of adults. Musk saw this as a business opportunity and touted the ability of xAI’s Grok chatbot to create “spicy” content, the lawsuit alleges. However, the lawsuit contends there is currently no way to allow the generation of explicit images of adults while completely preventing the generation of such images of children. The lawsuit also alleges that xAI knew Grok could create sexually explicit images of children and released it anyway.

The lawsuit alleges that those who distributed the plaintiffs’ images used applications that licensed xAI technology or “otherwise purchased access to Grok” and acted as intermediaries.

xAI did not respond to an email from The Associated Press seeking comment. But in a Jan. 14 post about the controversy on the social media platform X, the company said: “We remain committed to making X a safe platform for everyone and have zero tolerance for any form of child sexual exploitation, non-consensual nudity, or unwanted sexual content.”

“We take steps to remove high-priority violating content, such as child sexual abuse material (CSAM) and non-consensual nudity, and take appropriate action against accounts that violate the X Rules. We also report accounts soliciting child sexual exploitation material to law enforcement as appropriate.”

Meanwhile, the students who brought the lawsuit said they fear the images created of them will remain on the internet forever. They fear being stalked because their real names and school names are attached to the files. They worry that friends or classmates have seen the photos and videos, which look real, and they worry about who will see them in the future.

Jane Doe #1 said she suffers from anxiety, depression and stress. “She has difficulty eating and sleeping and suffers from recurring nightmares,” the complaint states. Jane Doe #2 “has begun self-isolating, avoiding being on school campuses, and is even afraid to attend her own graduation ceremony.” According to the complaint, Jane Doe #3 lives in constant fear and anxiety that someone will see her face in the AI-generated images and recognize her.

Lawler writes for The Associated Press.
