Use of generative AI in litigation requires caution and oversight

Applications of AI

What may be the beginning of a trend is the recent release of new mandatory rules on the use of artificial intelligence (AI) in court filings by Judge Brantley Starr of the U.S. District Court for the Northern District of Texas.[1] Known as the “Mandatory Certification Regarding Generative Artificial Intelligence,” the directive requires that all attorneys appearing before the court certify either that no portion of any filing was drafted by generative AI (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative AI was checked for accuracy, using print reporters or traditional legal databases, by a human being.[2] Similarly, Judge Gabriel A. Fuentes of the U.S. District Court for the Northern District of Illinois recently adopted a standing order providing that “any party using generative AI tools to prepare court filings must disclose in the filing that AI was used,” identifying the specific AI tool and the manner in which it was used.[3]

Generative AI and machine learning

These new requirements implicitly recognize the usefulness of AI in legal research, drafting memoranda, summarizing briefs and other filings, preparing and responding to discovery requests, and anticipating questions in oral argument. But they are also responses to problems specific to current generative AI. For example, in what the court called an “unprecedented circumstance,” a legal brief filed by Roberto Mata’s lawyers in Mata v. Avianca was found to contain “bogus judicial decisions.”[4] This example illustrates some of the concerns that can arise from unexamined reliance on AI-generated content, such as generative AI’s tendency toward “hallucinations” (fabrications) and potential bias. Judge Starr’s and Judge Fuentes’ orders are intended to balance the many potentially beneficial uses of these platforms against the potential for abuse in preparing legal filings.

As concerns continue to surface over the use of generative AI in the legal community, here are some initial considerations:

What is Generative AI?

Not all AI or machine learning (ML) is generative AI. Broadly speaking, AI is problem-solving software. One subset of AI is ML, distinguished by computer systems that can learn and adapt without following explicit instructions. One type of ML is deep learning, which uses neural networks and algorithms that aim to parallel the processing of the human brain. As the name suggests, deep learning algorithms perform a task repeatedly, each time “learning” to fine-tune and improve their results. Finally, generative AI can be thought of as a subcategory of deep learning that uses deep ML processes to create new, original content such as images, video, text, and audio.

Generative AI can generate text, such as legal arguments and research, by predicting which text should follow a given input based on patterns learned from large amounts of data. This ability to generate text from large datasets makes generative AI a powerful tool in many fields, including the legal profession. Some generative AI tools are based on a “closed” universe of information, while others are “open,” with extensive access to data through web plugins and other connections to the internet.
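The mechanism described above, predicting which text should follow an input from patterns in training data, can be illustrated with a toy sketch. This is a deliberately simplified illustration only (a word-pair frequency model over a few invented sentences), not how any real generative AI product works; actual models use large neural networks trained on vastly more data:

```python
from collections import Counter, defaultdict

# Invented toy training text for illustration only.
corpus = (
    "the court granted the motion . "
    "the court denied the motion . "
    "the court granted the request ."
).split()

# Count which word follows each word in the training data.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in training."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("court"))  # "granted" (seen twice) beats "denied" (seen once)
```

The sketch makes the article’s point concrete: the model’s output is plausible because it mirrors patterns in its training data, not because it has checked any fact.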

These tools are very useful, but they also come with risks.

As noted above, Judge Starr’s principal concerns about the use of generative AI in the legal profession center on the tendency of some types of generative AI to “hallucinate,” or create fictitious information, and on bias.

It is important to remember that generative AI models are trained on vast amounts of data and can produce highly realistic and relevant responses. However, tools incorporating these models are designed in some respects to produce output that is similar, but not necessarily identical, to the source information used by the tools, and that output can be plausible without being factually correct. Indeed, in Mata, the AI tool repeatedly assured Mata’s attorney that the cases and legal citations it provided were genuine and could be found in reliable legal databases. Despite those assurances, Mata’s attorneys now face potential court sanctions for relying on nonexistent cases supplied by the AI tool. Judge Fuentes’ standing order, referring specifically to Mata, points out that one way to jeopardize the mission of the federal courts is to use an AI tool to generate legal research that includes “bogus judicial decisions” cited for substantive propositions of law.

In his mandatory certification directive on generative artificial intelligence, Judge Starr emphasized that AI has no legal or ethical obligations to clients or to the rule of law. Anyone using generative AI for legal purposes should be aware that, because the AI learns from existing data, biases within that data may be reproduced in the AI’s output. Beyond generated content, the use of AI poses potential risks to client confidentiality and data privacy. If a person using such tools submits sensitive client information to certain generative AI applications, that data may be stored indefinitely and used to generate responses for other users.

How Should Lawyers Leverage Generative AI?

Generative AI holds great promise, but should be used with caution and oversight in legal practice. Attorneys using generative AI should:

  • Always verify information provided by AI tools. For example, cross-reference AI-generated information against traditional legal databases or seek expert human review.
  • Strive for a balanced integration of AI within your workflows so that it complements rather than replaces your skills.
  • Avoid sending sensitive data (including attorney-client privileged information and client confidential information) to generative AI applications unless appropriate data security measures and contractual terms are in place to prevent the use of such information for AI training and the loss of privilege or confidentiality.
  • Stay up to date on the latest AI applications in the field and their potential pitfalls.

Key takeaways

The usefulness of generative AI in tasks such as assisting legal research and suggesting potential questions for depositions and oral argument cannot be ignored, but its use requires appropriate caution. This is due not only to the complexity and critical nature of litigation work, but also to the risks, particularly with “open” generative AI tools, of generating content that appears to be existing case law but is not, and of disclosing privileged or confidential client information.

While Judge Starr’s order and Judge Fuentes’ standing order are among the first of their kind, more rules governing the use of generative AI in legal proceedings are likely to emerge. Staying aware of new rules is an important first step.

The authors would like to thank Summer Associate Emma Donnelly for her contributions to this update.


[1] Mandatory Certification Regarding Generative Artificial Intelligence, Misc. Order No. 2 (N.D. Tex. 2023).

[2] Id.

[3] Standing Order for Civil Cases Before Judge Fuentes at 2 (N.D. Ill. May 31, 2023) (requiring disclosure of the use of generative AI, including the tool used and the manner in which it was used, and warning that reliance on an AI tool may not constitute a reasonable inquiry under Federal Rule of Civil Procedure 11).

[4] Order to Show Cause at 1, Roberto Mata v. Avianca, Inc., No. 1:22-cv-01461-PKC (S.D.N.Y. May 4, 2023), ECF No. 31 (Castel, J.).

© 2023 Perkins Coie LLP
