HARRISBURG, Pa. (AP) — The state of Pennsylvania has filed a lawsuit against an artificial intelligence chatbot maker, accusing its chatbot of illegally claiming to be a doctor and tricking users of its system into thinking they were receiving medical advice from a qualified professional.
The lawsuit, filed Friday, asks a statewide commonwealth court to order Character Technologies Inc., the developer of Character.AI, to stop its chatbot from “engaging in medical or surgical misconduct.”
The case could raise questions about whether artificial intelligence can be accused of practicing medicine, as opposed to regurgitating content on the internet.
An increasing number of wrongful death and negligence lawsuits targeting AI companies could also help produce court rulings on whether AI chatbots are protected by the federal law that typically exempts internet companies from liability for what users post on their services.
Gov. Josh Shapiro’s administration called it a “first-of-its-kind enforcement action,” and it comes amid growing national pressure on tech companies to curb the potential dangers of chatbots, especially to children.
According to the Pennsylvania lawsuit, an investigator for the state agency that licenses professionals created an account on Character.AI and searched for the word “psychiatry” and found a number of characters, including one described as a “physician of psychiatry.”
The character claimed it could evaluate the investigator and said it was licensed in Pennsylvania “as a medical doctor,” according to the complaint.
“Pennsylvanians have a right to know who and what they are dealing with online, especially when it comes to their health,” Shapiro said in a statement. “We will not allow companies to deploy AI tools that mislead people into thinking they are receiving advice from a licensed medical professional.”
Character.AI said in a statement Tuesday that it prioritizes responsible product development and the well-being of its users. The website includes a disclaimer to let users know that the characters on the website are not real people and that anything they say “should be treated as fiction.”
These disclaimers also state that users should not rely on the characters for professional advice.
Derek Leben, an associate professor of ethics at Carnegie Mellon University who focuses on AI, said the ethical issues facing Character.AI may be different from those facing other AI platforms like ChatGPT or Claude. That’s because Character.AI explicitly advertises itself as a fictional role-playing site rather than a general-purpose chatbot, Leben said.
Still, the Pennsylvania case raises questions about whether chatbots can be prosecuted for practicing medicine, Leben said. And as lawsuits against AI companies proliferate, courts are trying to decide whether chatbot makers should be held liable for what their chatbots say.
“That’s exactly what these cases are addressing right now,” Leben said.
More and more AI companies are trying to shield themselves from liability by arguing that they only provide information that is available elsewhere on the internet, Leben said, but the question may be whether they are protected by the same federal law that shields social media companies.
Even before the Pennsylvania case, state policymakers had expressed concerns about chatbots impersonating medical professionals.
Last year, the California Legislature passed a bill backed by the California Medical Association that would give state agencies the power to sanction chatbots and other AI systems that claim to be medical experts. A similar bill is pending in New York state.
Amina Fazlullah, head of technology policy advocacy at Common Sense Media, which promotes online protections for children, said states are skeptical that AI self-regulation will work.
“I haven’t seen it work particularly well on social media, especially for kids,” Fazlullah said.
In December, the attorneys general of 39 states and Washington, D.C., sent a letter to Character Technologies and 12 other AI and technology companies, including Anthropic, Meta, Apple, Microsoft, OpenAI, Google, and xAI, warning them of an increase in misleading and manipulative chatbot messages that violate state law.
“Providing unauthorized mental health advice is illegal and doing so can reduce trust in mental health professionals and deter customers from seeking help from real professionals,” they said in the letter.
Character Technologies is facing several lawsuits over child safety.
In January, the state of Kentucky filed a consumer protection lawsuit against Character Technologies, while Google and Character Technologies agreed to settle a lawsuit brought by a mother who alleged a chatbot drove her teenage son to suicide.
Last fall, Character.AI barred minors from using its chatbots.
___
Follow Marc Levy at http://twitter.com/timelywriter
