In summary
Attorney General Rob Bonta says he is investigating whether Elon Musk’s xAI broke California law with its new AI image editing tool.
California Attorney General Rob Bonta today announced an investigation into whether, and how, Elon Musk’s xAI violated California law. xAI reportedly updated its Grok artificial intelligence tool last month to enable image editing, and users of the social media platform X, which is connected to the tool, began using Grok to remove clothing from photos of women and children.
“The volume of reports detailing the non-consensual and sexually explicit material produced by xAI and posted online in recent weeks is shocking,” Bonta said in a written statement. “This material, which depicts women and children naked and in sexually explicit situations, is being used to harass people on the internet. I urge xAI to take immediate action to ensure this does not happen again.”
Bonta urged Californians who wish to report depictions of themselves or their children nude or engaged in sexual activity to visit oag.ca.gov/report. xAI did not respond to emailed questions from CalMatters.
A study obtained by Bloomberg found that X users wielding Grok post more non-consensual nude or sexual images than appear on any other website. In a post on X, Musk promised “punishment” for those who use the tool to create illegal content, and on Friday xAI limited Grok’s image editing to paying subscribers.
One possible route for Bonta to prosecute xAI is through a law that went into effect just two weeks ago and establishes legal liability for the creation and distribution of “deepfake” pornography.
X and xAI appear to violate provisions of the law, known as AB 621, said Sam Dordulian, a former prosecutor in the Los Angeles County District Attorney’s Office’s sex crimes unit who now handles cases involving deepfakes and revenge porn in private practice.
The law’s author, Assemblymember Rebecca Bauer-Kahan, told CalMatters last week that she has contacted prosecutors, including the Attorney General’s Office and the San Francisco City Attorney, to remind them that they can act under the law. What’s happening with X is exactly what AB 621 was designed to address, Bauer-Kahan said.
“Real women are having their images altered without their consent, and the psychological and reputational damage is devastating,” the San Ramon Democrat said in an emailed statement. “Minor children are having their images used to create child sexual abuse material, and these websites knowingly encourage it.”
Global concern
Bonta’s investigation also comes on the heels of a call for an investigation by Gov. Gavin Newsom, pushback from regulators in the European Union and India, and bans on X in Malaysia, Indonesia and possibly the United Kingdom. Lawmakers and advocates are also calling on Apple and Google to ban the Grok app from their app stores even as downloads increase.
It’s unclear why xAI built this feature the way it did or how it will respond to the controversy surrounding it, and the answers may never come: a recent analysis concluded that Grok is the least transparent of any major AI system currently available. xAI did not respond to questions from CalMatters about that study.
“The psychological and reputational damage is devastating.”
Rebecca Bauer-Kahan, Democratic Assemblymember from San Ramon
There is mounting evidence of concrete harm caused by deepfakes. In 2024, the FBI warned that the use of deepfake tools to blackmail young people is a serious problem that has led to cases of self-harm and suicide. Multiple audits have found child sexual abuse material in the training data of AI models, which can in turn generate indecent images. And according to a 2024 Center for Democracy & Technology study, 15% of high school students had heard of or seen sexually explicit deepfake images depicting someone from their school in the past year.
The investigation announced today is the latest action by the attorney general to push AI companies to keep children safe. Late last year, Bonta sponsored a bill that would bar chatbots that discuss self-harm or hold sexually explicit conversations from interacting with people under 18. Bonta also joined attorneys general from 44 other states in sending a letter questioning why companies like Meta and OpenAI allow their chatbots to engage in sexually inappropriate conversations with minors.
California has passed about half a dozen laws since 2019 to protect people from deepfakes. Bauer-Kahan’s new law amends and strengthens a 2019 statute, most importantly by allowing district attorneys to sue companies that “recklessly aid and abet” the distribution of deepfakes depicting a person nude or engaged in sexual activity without that person’s consent. That means ordinary people can ask the attorney general or their local district attorney to file a lawsuit on their behalf. The law also raises the maximum amount a judge can award an individual from $150,000 to $250,000, and prosecutors do not have to prove that the people depicted in AI-generated nude or sexual images were actually harmed in order to bring a case. Websites that refuse to comply within 30 days may face fines of $25,000 per violation.
In addition to those measures, two 2024 laws (AB 1831 and SB 1381) expanded the state’s definition of child pornography and made it illegal to possess or distribute artificially generated child sexual abuse material. Another law requires social media platforms to give people an easy way to request the immediate removal of deepfakes and defines posting such material as a form of digital identity theft. A California law restricting the use of deepfakes in elections was signed last year but struck down by a federal judge last summer following a lawsuit brought by Elon Musk’s X.
Future reforms
While the new state laws give lawyers like Dordulian fresh avenues to combat the harmful use of deepfakes, he said more needs to be done to help people protect themselves. His clients have struggled to prove violations of existing laws because the protections apply only once explicit material is distributed, whether on messaging apps, social media platforms or elsewhere. In his experience, people who use undressing apps usually know their targets, so distribution doesn’t always happen, and even when it does, it can be difficult to prove.
For example, he said he has a client who works as a nanny and alleges that the father of the children she cares for created an explicit image of her from a photo she had posted on Instagram. The nanny found the image on her iPad. The discovery disturbed her and caused emotional trauma, but because he cannot bring her case under deepfake law, she must sue for negligence or emotional distress, claims that were never designed to address deepfakes. Similarly, victims told CNBC last year that the distinction between creating and distributing deepfakes leaves a gap in the laws of many U.S. states.
“The law needs to keep pace with what’s actually happening on the ground and what women are going through. It’s the act of creation itself that matters,” Dordulian said.
California has been at the forefront of legislating against deepfakes, but existing laws are not up to the task, said Jennifer Gibson, co-founder and director of Psst, an organization launched a little more than a year ago to provide free legal services to tech and AI workers interested in whistleblowing. A California law that took effect January 1 protects whistleblowers inside AI companies, but only when they raise catastrophic risks that could cause more than 50 deaths or more than $1 billion in damages. If the law covered deepfakes, Gibson said, a former X employee who last year told Business Insider he had witnessed Grok producing illegal sexually explicit material would have been protected in sharing that information with authorities.
“We need more protection for exactly this type of scenario,” Gibson said. “Insiders need to know that this is foreseeable, that this will happen, and that they have a place to report to, both to hold companies accountable and to protect the public.”
