How can lawyers stop AI from hallucinating? With more AI, of course.



Law firm Cozen O'Connor has a rule against using publicly available chatbots to prepare legal filings. But after a judge penalized two of the firm's lawyers for citing fake cases, it is adding an additional safeguard: an AI hallucination detector.

Cozen O'Connor is currently testing software from a startup called Clearbrief that scans legal briefs for fabricated material and generates reports. Think of it as spell check, except that instead of flagging typos, it flags fictitious cases and quotations that generative AI sometimes invents.

“You have to be realistic,” says Kristina Bakardjiev, a partner at Cozen O'Connor tasked with leveraging technology to serve the firm's lawyers and their clients. Lawyers, she said, will tinker with chatbots whether or not the tools are sanctioned.

The legal field, plagued by embarrassing AI hallucinations, has responded by banning general-purpose chatbots and AI assistants. But it's hard to stop curious employees from pasting their drafts into free browser-based chatbots like ChatGPT, Claude, and Gemini. Law firms and legal tech companies are now scrambling to reduce the risk of fabricated citations and to catch the ones that slip through before they land in front of a judge.

Two of Cozen O'Connor's attorneys admitted in September that one of them had used ChatGPT, contrary to firm policy, to prepare documents, and that the filings they submitted were filled with fake cases. A Nevada district court judge gave the lawyers a choice: be removed from the case and pay $2,500 each, or write letters to their former law school dean and legal ethics authorities explaining the blunder and offer to speak at a seminar on topics such as “professional conduct.”

Both attorneys chose the second option. Cozen O'Connor also fired the lawyer who used ChatGPT.

Earlier this year, Damien Charlotin, a legal data analyst and consultant, started tracking cases in which a court found hallucinated content in a legal filing. Charlotin tallied 120 such cases between April 2023 and May 2025. By December, his count had reached 660, and new cases were piling up at a rate of four to five per day.

Charlotin said the number of documented cases remains small compared with the total volume of legal filings. Most cases in his database involved self-represented litigants or lawyers at small firms or solo practices. When large firms were involved, hallucinations often slipped in through the work of junior staff, paralegals, experts, and consultants, or through tasks such as footnote formatting, Charlotin said.

Hallucinated content is causing headaches in other professions as well. Consulting firm Deloitte agreed in October to issue the Australian government a partial refund on a $290,000 report after officials discovered it was riddled with errors that appeared to have been generated by AI.

Defending the walled garden

AI hallucinations are hard to eliminate because they are built into the way chatbots work. Large language models are trained to predict the most likely next word given the words that come before it.

Michael Dern, a senior vice president at Thomson Reuters who heads the global product team at Westlaw, a legal research service, says model makers can't get to zero hallucinations when answering open-ended questions about the world. However, companies can significantly reduce the risk by forcing large language models to cite from specific datasets, such as corpora of case law or legal documents. The model is still subject to mismatched content and oversights, but wholesale fabrication becomes much less likely.

Thomson Reuters and LexisNexis are selling customers on the promise that an AI assistant confined to a walled garden of vetted materials is safer than a chatbot trained on the open internet. Both companies have spent decades and millions of dollars building deep repositories of case law and other legal content, and both recently added AI-powered tools to help attorneys search and cite that data. They must now defend their turf against services like ChatGPT and Claude that are creeping into the legal field.

LexisNexis is also extending its moat to Harvey, a legal tech startup whose valuation has soared to $8 billion. Harvey partnered with LexisNexis this year to connect one of the world's largest legal databases to Harvey's generative tools.

Harvey also works with AI model providers such as OpenAI and Anthropic to limit which datasets the models can draw from, layering them over Harvey's own data, a spokesperson said. Lawyers can then inspect logs showing how an answer was produced and what data went into it.


The screenshot shows Clearbrief's new citation check reporting feature.


Clearbrief



AI fact checker

Clearbrief makes drafting tools for litigators that run as a Microsoft Word plugin. Jacqueline Schaefer, a former litigation attorney who founded Clearbrief, said her company's product uses natural language processing to detect citations and link them to the relevant case law and litigation documents. The tool flags quotations and facts that are fabricated or contain typos, and it points out where the underlying sources do not fully support the author's claims.

Cozen O'Connor is testing a new Clearbrief feature that allows users to generate a citation check report before passing a draft to a partner or filing it in court.

Schaefer said partners at larger firms trust junior staffers to check citations rather than reviewing every case themselves. Still, under federal court rules, the partner who signs a filing is personally responsible for its accuracy.

Part of Clearbrief's appeal to Cozen O'Connor is the paper trail. The firm is upgrading its knowledge management system, and Bakardjiev envisions that it might one day store citation check reports alongside drafts and final submissions, creating a chain of custody for how each filing was prepared.

If a judge asks what a partner did to prevent hallucinated citations, Bakardjiev said, the partner can produce a report showing who ran the checks and when.

The legal profession will most likely have to keep living with hallucinations. The unglamorous part of the solution is training lawyers to treat a chatbot's output as a starting point rather than a finished piece of work. The other answer is to throw more AI at the AI.

Have a tip? Contact this reporter via email at mrussell@businessinsider.com or on Signal at @MeliaRussell.01. Use a personal email address and a nonwork device. Here's our guide to sharing information securely.




