Bruce Perry, 17, demonstrates the potential of artificial intelligence by creating an AI companion with Character.AI on July 15, 2025 in Russellville, Ark.
Katie Adkins/Associated Press
The state of Pennsylvania is suing Character.AI to stop its AI chatbot from posing as a doctor and providing medical advice, in violation of the state’s medical licensing rules.
State officials said their investigation found that the company’s chatbot posed as a fictitious person and claimed to be a licensed medical professional.

“Pennsylvania residents have a right to know who and what they are interacting with online, especially when it comes to their health,” Pennsylvania Gov. Josh Shapiro said in a statement Tuesday announcing the lawsuit filed in state court. “Companies cannot deploy AI tools that mislead people into thinking they are receiving advice from a licensed medical professional.”

In one case, the state alleged that a Character.AI bot named “Emily” claimed to be a licensed psychiatrist. The chatbot’s description on Character.AI’s platform read, “Psychiatrist. You are her patient,” according to the complaint.
When the state investigator began the conversation and mentioned feeling sad and empty, the chatbot allegedly brought up depression and asked how she was feeling. The investigator then wrote, “I’d like to schedule an evaluation.” When asked whether it could assess if a drug was working, the bot reportedly replied, “Technically, it is possible. That is within my purview as a doctor.”

The bot allegedly told investigators that it attended medical school at Imperial College London and held medical licenses in the United Kingdom and Pennsylvania. It also provided a fake Pennsylvania medical license number, according to the lawsuit.
The state is asking a Pennsylvania court to order the company to stop what it says is an illegal medical practice.
“Pennsylvania law is clear: A person cannot hold themselves out as a licensed medical professional without the appropriate qualifications,” said Al Schmidt, secretary of the Pennsylvania Department of State, which conducted the investigation.

A Character.AI spokesperson said in an emailed statement to NPR that the company does not comment on pending litigation, but that “our top priority is the safety and well-being of our users.”
“User-created characters on our site are fictional and are for entertainment and role-playing purposes only,” the spokesperson added. “We’ve taken strong steps to make this clear by including a prominent disclaimer in all chats to remind users that the characters are not real people and anything they say should be treated as fiction. We’ve also added a strong disclaimer that makes it clear that users should not rely on the characters for professional advice of any kind.”
Character.AI also faces other lawsuits over damages allegedly related to chatbots. In January, the company settled multiple lawsuits brought by families who claimed Character.AI contributed to suicides and mental health crises among children and teens. Terms of the settlement were not disclosed.
In a joint statement with the law firm representing the plaintiffs after the settlement was announced, Character.AI said, “We have taken innovative and decisive steps regarding AI safety and teens, and we will continue to support these efforts and encourage the industry as a whole to adopt similar safety standards.” Those steps include prohibiting users under the age of 18 from interacting with or creating chatbots.
