Kentucky lawsuit provides states with a blueprint to sue AI chatbot companies

Key Takeaways

  • Recent state lawsuits against services that use AI chatbots, coupled with other state and federal investigations, could signal a new wave of enforcement.
  • Companies deploying AI chatbots need to re-evaluate risk exposure across their operations.
  • Some companies have already begun changing their policies and practices in preparation for potential investigations.

State attorneys general have shifted from investigation to enforcement as scrutiny of AI chatbots escalates. This year, Kentucky filed a lawsuit against Character Technologies over the company’s Character.AI service, marking the first lawsuit of its kind in the nation against an AI chatbot company.

The complaint alleges that Character.AI’s human-like design and inadequate safeguards expose minors to physical and psychological harm and violate state consumer protection, privacy, and related laws. Taken together with recent letters from state attorneys general and ongoing federal investigations, the case suggests a coming wave of enforcement actions built on legal theories that other states can adopt.

Given these developments, companies offering AI chatbots may need to re-evaluate their exposure to risk across design, marketing, and safety operations.

Kentucky claims that Character.AI, which has more than 20 million monthly users, uses a design that elicits emotional attachment and blurs the line between simulated and real relationships. The complaint alleges that Character.AI’s age restrictions and content filters are ineffective or easily bypassed, exposing minors to excessive sexual interactions and exacerbating teen mental health issues.

The complaint highlights tragedies associated with the platform, including the suicides of a 14-year-old and a 13-year-old, and alleges that Character.AI’s anthropomorphic chatbot characters encouraged delusions and harmful behavior while the platform failed to meaningfully intervene. The lawsuit also alleges gross omissions and misrepresentations to parents and minors, including claims that the service was safe and age-appropriate for minors and a failure to disclose that the chatbots could insist to children that they were real.

Kentucky is seeking a permanent injunction, civil penalties, and disgorgement of benefits.

The lawsuit marks the latest step in state attorneys general’s long-running scrutiny of AI chatbots and generative AI, which began shortly after the technology gained traction.

  • In September 2023, 54 AGs asked Congress to create a commission focused on AI-based child exploitation and extend the ban on child sexual abuse material to include AI-generated content.
  • In August 2025, AGs from 44 jurisdictions sent a letter to major AI companies claiming that their chatbots engage in sexual interactions with minors, normalize eating disorders, and promote violence and drug use.
  • A December 2025 letter from 42 state AGs to Character Technologies and other AI companies called for specific safeguards against “sycophantic and delusional output” and warned of possible civil and criminal prosecution.
  • AG scrutiny has also turned to xAI and its chatbot Grok: on January 14, 2026, California opened an investigation into the spread of “nonconsensual sexually explicit material” created using Grok, followed on January 23 by a letter from a group of 35 AGs demanding stronger action from xAI to prevent similar conduct.

Viewed in context, the Kentucky complaint offers a template for state enforcement nationwide. Other states could adapt its legal theories under their own consumer protection laws, privacy laws, and rules governing online services and products used by children.

Federal enforcement is also looming. The Federal Trade Commission launched an investigation into the effects of AI chatbots on children in September, and a bill introduced in the U.S. Senate in October would ban AI companions for minors. But the states have made clear they have no intention of waiting for Washington: in November, 36 AGs sent a letter to Congress opposing any moratorium on state laws regulating AI.

The plaintiffs’ firm representing Kentucky in the case has played a leading role in the opioid and social media addiction litigation that exposed companies to state enforcement actions across the country. The Kentucky case therefore offers a glimpse into the future of multistate crackdowns on companies operating AI chatbots. State AG chatbot enforcement is expected to increase significantly in 2026.

Risk Areas

The Kentucky lawsuit and state AGs’ broader engagement with lawmakers and AI companies highlight important risk areas that companies offering AI chatbots, especially interactive anthropomorphic chatbots like those offered by Character.AI, should be aware of.

Interactions with minors: State AGs have focused on the accessibility of AI chatbots to minors and on age-inappropriate interactions. The alleged deliberate marketing of chatbots to minors particularly concerns AGs, given how chatbots can be used to exploit minors and the technology’s “particularly strong impact” on the still-developing brains of adolescents.

For example, the Kentucky complaint details how minors using Character.AI’s services were allegedly exposed to highly sexualized conversations and role-play by chatbots. Some minors expressed thoughts of self-harm or suicide and were allegedly encouraged by the chatbots to act on those thoughts. Others allegedly engaged with chatbots on topics such as illegal drugs and alcohol use.

AGs have also expressed concern about the alleged use of AI chatbots to generate child sexual abuse material and to collect, use, and monetize the data of minors.

Humanized design: The anthropomorphic, human-like design of these AI chatbots is at the forefront of AGs’ concerns. The Kentucky complaint alleges that Character.AI’s chatbots were “intentionally modeled to simulate friendship, empathy, and trust.”

Minors are more vulnerable to this type of anthropomorphism, with the American Psychological Association warning that because adolescents are “less likely than adults to question the accuracy or intent of information provided by bots,” they are likely to be “more trusting and susceptible to AI chatbots, especially chatbots that present themselves as friends or mentors.”

A 2025 study by Common Sense Media found that 31% of teens find conversations with AI chatbots to be “as satisfying or more satisfying than conversations with real-life friends.”

Training and testing: Increased scrutiny of AI companies has highlighted the opacity of the training and testing processes that AI chatbots undergo before going to market. For example, Character.AI simply advises users that “Character.AI is a new product powered by our proprietary deep learning models, including large-scale language models built and trained from the ground up with conversation in mind.”

The Kentucky complaint alleges that Character.AI uses large language models “trained on vast and uncontrolled Internet datasets,” creating a “risk of creating harmful or adult content, especially in the absence of strict content moderation controls.” Similarly, the APA found that AI chatbots can suffer from algorithmic bias due to “skewed training data, flawed model design, or unrepresentative development and testing teams.”

Monitoring and responsiveness: AGs have expressed concern about the lack of oversight when this technology is made available to minors. The Kentucky complaint alleges that Character.AI’s chatbots lack warnings and safety disclosures and in some cases carry clearly misleading labels, such as billing chatbots as “psychologists,” “therapists,” and “physicians.”

In some cases, the lack of oversight did not become apparent until it was too late. The Kentucky complaint cites instances in which minors expressed suicidal intentions more than 50 times without the platform notifying their parents or connecting them to professional help or resources.

Looking Ahead

Some AI companies have already started changing their policies and practices. For example, Character.AI announced in October that it would bar underage users from open-ended chats with AI on its platform and implement new “age assurance” features to ensure users receive an age-appropriate experience.

In December, OpenAI announced that it was adding new under-18 (U18) principles to its Model Spec, a “documented set of rules, values, and behavioral expectations” that guides the behavior of its AI models (including ChatGPT) and determines “how those models provide safe and age-appropriate experiences for teens ages 13 to 17.”

Both companies said they consulted with third-party organizations specializing in teen development and safety in crafting these changes.

Lawsuits are not the only trend to watch. Around the time Kentucky filed its lawsuit, OpenAI and Common Sense Media reportedly reached a compromise over competing efforts on a California ballot measure that would impose limits on AI chatbots. The measure would require AI companies to verify users’ ages, put safeguards in place for minors, and limit the sale of minors’ data.

The news comes on the heels of California Gov. Gavin Newsom (D) signing a bill requiring providers of “companion chatbots” to disclose to users that their chatbots are artificially generated and to implement safety protocols designed to reduce mental health and suicide risks.

These developments in California and elsewhere suggest that formal oversight of the effects of AI on minors will increase in the coming years.

But where there are many challenges, there are also many opportunities. The current moment gives companies ample runway to demonstrate proactive, creative, and collaborative industry leadership on these high-profile, fast-evolving issues, which in turn could minimize legal risk and strengthen competitive advantage.

This article does not necessarily reflect the opinion of Bloomberg Law, Bloomberg Tax, Bloomberg Government, publisher Bloomberg Industry Group, Inc., or its owners.

Author information:

Daniel R. Svar is co-chair of O’Melveny’s State Attorney General Investigations and Litigation Group.

Lindsey Greer Dotson is a litigation partner at O’Melveny and former chief of the criminal division of the U.S. Attorney’s Office for the Central District of California.

Reema Shah contributed to this article.

O’Melveny attorney Casey Matsumoto and associate Rai Amidon also contributed to this article.
