As the world focuses on artificial intelligence (AI), experts are warning people not to rely on AI chatbots for advice, especially on medical, legal, and financial matters.
Some U.S. lawyers are telling their clients not to treat AI chatbots like trusted confidants when their freedom or legal liability is at stake.
Those warnings took on added urgency this year after a federal judge in New York ruled that AI chat logs are not shielded from prosecutors pursuing securities fraud charges against the former CEO of a bankrupt financial services company.
Following the ruling, lawyers cautioned that conversations with chatbots such as Anthropic’s Claude and OpenAI’s ChatGPT can be obtained by prosecutors in criminal cases and by litigants in civil cases.
“We tell our clients, ‘We should proceed with caution here,’” said Alexandria Gutierrez Sweat, a lawyer at the New York-based law firm Kobre & Kim.
Discussions between people and their attorneys are, in most cases, considered confidential under U.S. law.
However, AI chatbots are not lawyers, and attorneys are instructing their clients to take steps to keep communications with AI tools more private.
In emails to clients and advisories posted on their websites, more than a dozen major U.S. law firms have outlined advice for individuals and businesses to reduce the likelihood that AI chats end up in court.
Similar warnings are also being written into engagement agreements between some law firms and their clients.
For example, New York-based law firm Sher Tremonte said in a recent client agreement that sharing an attorney’s advice or communications with a chatbot could destroy attorney-client privilege, the legal protection that typically keeps communications between attorneys and their clients confidential. “Voluntary disclosure of information from your attorney to third parties may jeopardize customary legal protections for communications with your attorney,” the agreement states.
In February, Manhattan-based U.S. District Judge Jed Rakoff ruled that Heppner, the former CEO, must hand over 31 case-related documents generated by Anthropic’s chatbot Claude.
“There is no attorney-client relationship that exists or can exist between AI users and platforms like Claude,” Rakoff wrote.
Courts are already grappling with the growing use of artificial intelligence by lawyers and litigants, which has produced court filings that cite, among other things, cases fabricated by AI.
ChatGPT and other generative AI programs “are tools, not people,” Gutierrez Sweat said.
OpenAI and Anthropic representatives did not immediately respond to requests for comment. Both companies’ privacy policies and terms of use state that they may share user data with third parties.
Both platforms also advise users to consult a qualified professional rather than rely on a chatbot for legal advice.

