Most lawyers know that their core task is communication: describing the issues in a matter so that the client, court, or regulatory authority reaches an informed decision, or so that a dispute resolves in the client's favor.
So why does the term "artificial intelligence" cause some communicators to hesitate, freeze, or fall back on undefined jargon? Attorneys who advise clients on acquiring AI tools, offering AI output as evidence in litigation, or defending that output should understand the technology well enough to know what questions to ask.
This fundamental element of the lawyer's job is not just a business problem; it goes to the heart of the lawyer's obligation to provide competent representation. It has been 12 years since the American Bar Association updated Model Rule 1.1, on competence, to state that lawyers should "keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology."
Counseling on AI-related issues does not require a computer science degree or coding experience. Lawyers need only enough familiarity with the terminology to ask the right questions, and to explain the answers, when rendering advice on a client's behalf.
A simple framework built on the concept of explainable AI can establish credibility with the stakeholders who depend on a tool's use and output. For courts and regulators to accept the tool, it must be reliable, and establishing reliability requires explainability.
What is the AI tool designed to do? In other words, what does the tool do, or what should it do, and what business purpose does it serve? These questions help determine whether an AI tool is right for the job.
Is the tool really AI, or is it machine learning? AI mimics human intelligence and can act on its own, with or without a prompt, according to its algorithms (i.e., operational rules), while machine learning uses algorithms to teach machines to perform very specific tasks.
Which type of AI is involved? Is the tool predictive AI, which analyzes a defined dataset and outputs trends such as cost forecasts or optimal shipping routes? Or is it generative AI, which creates text, art, or code based on its training data? Hybrid AI tools also exist, such as generative AI that drafts reports based on predictive AI findings.
What dataset was used to train the AI tool? Poor input data, including biased data, can impair reliability and create a risk of discrimination. Training data can also create a risk of intellectual-property claims if it is protected by copyright, trademark, trade secret, or patent and a proper license was not obtained.
How is the quality of the tool's output monitored? Generative AI is notorious for making mistakes and inventing untrue "facts," known as "hallucinations." Establishing reliability requires a defined quality-assurance or quality-control process, and counsel should advise clients that this requires personnel and budget in the business plan for acquiring and deploying the AI tool. Review of the output is also essential to establishing its reliability as evidence.
Litigators face similar issues, along with additional hurdles to the admissibility of evidence. The US District Court for the District of Nevada has assessed AI evidence against the National Institute of Standards and Technology's AI Risk Management Framework. That framework equates trustworthiness with reliability, requiring explainability, accuracy (or validity), security, and bias mitigation.
Considerations for AI evidence include whether a party challenging it will demand proprietary information about the algorithm (such as trade secrets), the tool's training and testing data, or its source code.
If so, the attorney should seek a strong confidentiality or protective order signed by the court. Some clients will still be skittish about revealing this information, which can become a factor in settlement negotiations.
Another consideration is whether the evidence can withstand a reliability challenge under Daubert v. Merrell Dow Pharmaceuticals, which considers error rates, general acceptance, testability, and peer review.
Advocates should anticipate arguments that new technologies such as generative AI fall short of these standards and should construct an admissibility argument accordingly. Most AI tools have not faced such scrutiny, particularly peer review. This is where explainable AI and effective communication matter most: advocates must clarify the tool's purpose, function, and relevance.
This framework for explaining and discussing AI tools, and AI-based or AI-generated evidence, will evolve as the underlying tools and technologies change. As the landscape shifts, understanding these fundamentals is essential to advising clients effectively and meeting the challenges of AI-related legal issues.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Author information
Kenneth N. Rashbaum is a partner at Burton, focusing on privacy, cybersecurity, and electronic discovery.
Lani E. Medina is a senior associate in Burton's corporate practice.
