Use Agentic AI judiciously for maximum benefit in legal operations

Most in-house lawyers are familiar with predictive artificial intelligence and generative AI. Agentic AI, however, represents the next evolutionary step in the technology.

Unlike generative AI, which is typically confined to chat interfaces, agentic systems can autonomously create plans, acquire data, use tools, and perform tasks across integrated applications.
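To make that distinction concrete, here is a minimal, purely illustrative sketch of an agentic loop in Python. Every name in it is hypothetical rather than drawn from any particular product; the point is only the plan-act-observe cycle that separates an agent from a chatbot.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Step:
    """One planning decision from a model (hypothetical structure)."""
    done: bool = False          # has the goal been met?
    answer: str = ""            # final output when done
    tool: str = ""              # which tool to invoke next
    args: dict = field(default_factory=dict)

def run_agent(goal: str,
              tools: dict[str, Callable[..., Any]],
              plan: Callable[[str, list], Step],
              max_steps: int = 10) -> str:
    """Plan, act, and observe in a loop until the goal is met.

    A generative chatbot would stop after producing text; an agentic
    system keeps choosing and executing tools toward the goal.
    """
    history: list = []
    for _ in range(max_steps):
        step = plan(goal, history)               # 1. Plan the next action
        if step.done:
            return step.answer
        result = tools[step.tool](**step.args)   # 2. Act via a tool
        history.append((step, result))           # 3. Observe and iterate
    return "Stopped: step limit reached before the goal was met."
```

The `max_steps` cap is one example of the kind of built-in limit on autonomy discussed later in this article.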

Navigating the agentic AI landscape can be difficult. The term may evoke apocalyptic images from science fiction, in which autonomous machines overpower humans with catastrophic consequences.

The overuse and misuse of the term in marketing materials further complicates matters. Many tools branded as “agents” are merely highly automated systems that lack true autonomy. This confuses legal teams trying to assess risk accurately and can delay or derail implementation.

The current debate around agentic AI resembles the general sentiment toward generative AI in early 2023: mistrust, skepticism, and outright bans. But legal departments, perennially constrained by limited resources and challenged to do more with less, stand to benefit greatly from a thoughtful automation strategy.

More than 30% of U.S. legal professionals currently use generative AI to support their work, and adoption is even higher among respondents at law firms with 50 or more lawyers. Yet the technology has its limitations: according to a study conducted earlier this year, nearly 80% of companies reported that implementing generative AI has had no significant impact on their bottom line.

Beyond document creation, agentic AI has the potential to deliver a return on investment for legal operations by autonomously executing multi-step workflows such as contract review, compliance checks, and case management. It builds on existing capabilities to drive efficiency gains that generative AI alone cannot achieve.
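To illustrate what a "multi-step workflow" might look like in practice, here is a hypothetical sketch of a contract-review pipeline. The playbook structure and stub functions are assumptions made for illustration, not any vendor's actual design.

```python
# Hypothetical contract-review pipeline. In an agentic system, the model
# would sequence steps like these itself; here they are fixed for clarity.

def extract_clauses(contract_text: str) -> dict[str, str]:
    """Split the contract into labeled clauses (stub for illustration)."""
    return {"limitation_of_liability": "...", "governing_law": "..."}

def check_against_playbook(clauses: dict[str, str],
                           playbook: dict[str, str]) -> list[str]:
    """Flag clauses that deviate from the organization's standard terms."""
    return [name for name, standard in playbook.items()
            if clauses.get(name) != standard]

def review_contract(contract_text: str, playbook: dict[str, str]) -> dict:
    clauses = extract_clauses(contract_text)
    deviations = check_against_playbook(clauses, playbook)
    # Escalate deviations to a human reviewer rather than acting alone.
    return {"deviations": deviations, "needs_human_review": bool(deviations)}
```

Note that the final step deliberately escalates rather than executes, previewing the questions of autonomy and human control discussed below.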

To capture these benefits, legal departments should consider building, buying, or partnering to deploy an agentic AI system. But because agentic AI not only creates content but also takes autonomous actions in the real world, it poses legal risks distinct from those of generative AI.

These risks raise questions about liability and contract formation that go beyond the familiar concerns of intellectual property, privacy, and hallucinations. They include new regulatory compliance obligations around automated and consequential decision-making, new data security vulnerabilities, and potential liability for autonomous actions that are unauthorized, unexpected, or insufficiently supervised.

That creates a bind for the legal department: the staffing and budget constraints that make agentic AI assistance most valuable also limit its ability to understand the technology properly and to experiment with the right tools. The answer lies in scoping a system's autonomy and asking the right questions, so risk assessments rest on the actual capabilities of these tools rather than their labels.

To assess the potential risks of an agentic AI system, legal teams should ground their evaluation in two fundamental questions.

How much autonomy does the agent have? Does it merely suggest actions, or can it execute multi-step workflows across integrated systems without human approval?

How much control can humans maintain? Are there guardrails, override mechanisms, and audit trails? Can humans intervene before irreversible actions occur?

These questions allow legal teams to map a system's position on the autonomy spectrum, which ranges from assisted AI (low autonomy, high human control) to fully agentic AI (high autonomy, minimal human oversight).

As an agentic AI system's autonomy increases, particularly its ability to act without human intervention, the scale and scope of potential harm grow disproportionately. A system that merely recommends contract terms carries minimal risk; one that autonomously executes binding contracts carries substantial risk.
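As a purely illustrative rubric (an assumption layered on this framework, not a legal standard), the two questions can be combined into a coarse risk tier:

```python
# Illustrative rubric: map the two questions -- autonomy level and degree
# of human control -- to a coarse risk tier. Categories are hypothetical.
AUTONOMY = {"suggests_only": 1, "acts_with_approval": 2, "acts_unsupervised": 3}
CONTROL  = {"full_override_and_audit": 1, "partial_guardrails": 2, "minimal": 3}

def risk_tier(autonomy: str, control: str) -> str:
    score = AUTONOMY[autonomy] * CONTROL[control]
    if score <= 2:
        return "low"     # e.g., a spam filter that only flags messages
    if score <= 4:
        return "medium"
    return "high"        # e.g., a bot acting without meaningful oversight

# A tool that only recommends contract terms under full human control:
print(risk_tier("suggests_only", "full_override_and_audit"))  # -> low
# A tool that executes binding contracts with minimal oversight:
print(risk_tier("acts_unsupervised", "minimal"))              # -> high
```

Any real rubric would need more dimensions (data sensitivity, reversibility, affected parties), but even a coarse one forces the two fundamental questions to be answered explicitly.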

By understanding where a tool falls on this spectrum, legal departments can calibrate risk management strategies, allocate resources more effectively, and avoid both over-regulating low-risk tools (an organization's AI-powered email spam filter, say) and under-scrutinizing high-risk ones (an organization's AI interview bot).

This mapping also feeds directly into AI risk assessments, which form a core component of responsible AI governance programs and are increasingly mandated under proliferating data privacy and AI-specific legislation.

By anchoring assessments to the scope of autonomy and to the fundamental questions of agent autonomy and human control, legal teams can make informed decisions that balance innovation with risk management.

The future of legal work lies not in avoiding these technologies but in carefully integrating them where they can deliver the most value.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.

Author information

Goli Mahdavi is a partner at Bryan Cave Leighton Paisner, a founding member of the firm's AI working group, and co-leader of its AI service line.
