A new legal ethics opinion on the use of generative AI in legal practice makes one point very clear: lawyers must maintain competence with all technological tools relevant to their practice, and that includes generative AI.
The opinion was issued jointly by the Pennsylvania State Bar and the Philadelphia Bar Association to educate lawyers about the benefits and pitfalls of using generative AI and to provide ethical guidelines.
While the opinion focuses on AI, it reiterates that lawyers’ ethical obligations surrounding this emerging technology are no different from those governing any other technology.
“Lawyers must be as proficient in using technological tools as they are in employing more traditional methods,” the opinion states. “Whether it's knowing how to navigate legal research databases, use electronic discovery software, use a smartphone, use email, or protect client information in digital form, lawyers are expected to maintain proficiency across all technological avenues relevant to their work.”
That said, the opinion acknowledges that generative AI poses unique problems not previously seen in legal technology. Most significant is its ability to generate text, and to hallucinate in the process. The opinion states that the technology's text-generating capabilities “break new ground for ethical guidelines.”
“Rather than focusing on whether a lawyer's choice of a particular legal argument is meritorious, some lawyers are using generative AI platforms without checking the citations or legal arguments,” the opinion explains. “In essence, the AI tool gives the lawyer exactly what he or she asked for, and the lawyer does not perform due diligence on the results after getting a positive result.”
The opinion also touches on the possibility that AI may be biased, pointing out that “AI is not a blank slate free of prejudice or preconceived ideas.”
“These biases can lead to discrimination and favor certain groups or perspectives over others, and can manifest in areas such as facial recognition and hiring decisions,” the opinion said.
Taking these issues into account, the opinion states that lawyers have a duty to communicate with their clients about the use of AI technology in their work. The opinion advises that in some cases, lawyers should obtain client consent before using certain AI tools.
12 Responsibilities
The 16-page opinion is a concise primer on the use of generative AI in legal practice, and also includes a brief background on the technology and an overview of ethics opinions from other states.
But most importantly, it concludes with 12 responsibilities relevant to lawyers who use generative AI.
- Be truthful and accurate: Lawyers must ensure that AI-generated content, such as legal documents and advice, is true and accurate, based on sound legal reasoning, and consistent with the principles of honesty and integrity in their professional conduct.
- Check citations and cited material for accuracy: Lawyers must ensure that the citations they use in legal documents and arguments are accurate and relevant, including that the citations accurately reflect the content they refer to.
- Ensure competence: Lawyers must be competent in the use of AI technology.
- Maintain confidentiality: Lawyers must protect information related to their client's representation and ensure that AI systems that handle sensitive data adhere to strict confidentiality measures and do not share sensitive data with others not protected by attorney-client privilege.
- Identify conflicts of interest: Lawyers must be vigilant in identifying and addressing potential conflicts of interest that arise from the use of AI systems.
- Communicate with clients: Lawyers should communicate with their clients about the use of AI in their work, providing a clear and transparent explanation of how such tools will be used and their potential impact on the outcome of the case. Where necessary, lawyers should obtain client consent before using a particular AI tool.
- Ensure information is unbiased and accurate: Lawyers must ensure that the data used to train AI models is accurate, unbiased, and ethically sourced, to prevent bias or inaccuracies from appearing in AI-generated content.
- Ensure AI is used appropriately: Lawyers must be vigilant against the misuse of AI-generated content, ensuring it is not used to deceive or manipulate legal processes, evidence, or outcomes.
- Uphold ethical standards: Lawyers must stay informed about relevant regulations and guidelines governing the use of AI in legal practice to ensure compliance with legal and ethical standards.
- Exercise professional judgment: Lawyers must apply their own professional judgment when relying on AI-generated content, recognizing that AI is a tool that supplements, but does not replace, legal expertise and analysis.
- Use appropriate billing practices: Because AI offers enormous time savings, lawyers must ensure that AI-related charges are reasonable and properly disclosed to clients.
- Maintain transparency: Lawyers should be transparent with clients, colleagues and courts about their use of AI tools in their legal practice, including disclosing any limitations or uncertainties related to AI-generated content.
My advice: Don't be stupid.
In my years of writing about legal technology and legal ethics, I've developed my own shortcut for staying out of trouble: Don't be stupid.
It would be foolish, for example, to ask ChatGPT to find cases to support your arguments and then submit them to a court without even reading or Shepardizing them. Likewise, it would be foolish to ask a generative AI tool to draft a court filing or an email to a client and then send it off without editing it.
In their joint opinion, the Pennsylvania and Philadelphia ethics committees put the “don't be stupid” guideline in more polite terms, warning that lawyers who use generative AI tools must understand their risks and benefits.
“These tools should be used with caution and should prompt attorneys to carefully review the 'work product' produced by these tools. These tools are not a substitute for personal review of cases, statutes, and other legislative materials.”
The full opinion can be found here: Joint Formal Opinion 2024-200.
