How to ensure AI keeps legal minds sharp, not dull
I am not a lawyer. However, my experience learning Russian and studying AI in business applications suggests that lawyers can hurt their careers in one of two ways:
- By relying too much on AI, or
- By working without AI support at all
Thankfully, neither needs to be true.
AI saves valuable time for legal professionals in both law firms and in-house teams. This can improve quality of life in addition to freeing up resources for more valuable work.
So how can AI, particularly its modern forms, generative AI (GenAI) and agentic AI, drive this shift, and how can users strike the right balance between underuse and overdependence?

Let's call it AI work-life balance.
Jump to:
- What lawyers can learn about using AI for language learning
- The risk of AI becoming a crutch
- Efficiency combined with critical thinking
- Examples of a healthy AI work-life balance
What lawyers can learn about using AI for language learning
Learning a new language can be overwhelming, and it certainly was for me. Living, studying, and working in Russia for several years was a major challenge, but an exciting growth experience. The cold climate and different culture were intimidating, but the Russian language itself was a giant of its own.
The threat of wounded pride and embarrassment limited my progress. Equally harmful was the temptation to rely too heavily on Russian teachers who spoke fluent English.
Learning a new language and learning how to use AI can trigger similar reactions in lawyers. There are two general responses, embodied by two groups.
One group feels an implicit fear, or a stubborn unwillingness, to learn AI and become proficient in it. That is completely understandable. What lawyer has the time? Meanwhile, the second group naively relies entirely on AI, which can derail a case in court.
Let's start with an encouraging idea and a brief explanation for the first group.
The key to learning a new language is the same key that unlocks the potential benefits of AI: breaking large, complicated concepts down into smaller, bite-sized parts. It is, in essence, a component-based approach to learning.
Russian grammar and its case system are notoriously difficult to learn. A word can take one of six forms depending on how it is used in context.
It was important to learn one case at a time, for one word at a time, within one part of speech.
“What we've learned is that it takes time to get to [a] level of proficiency in any advanced technology, including AI. If I've never seen a large language model before, I can't use it in my workflow with high effectiveness.”
— General Manager, Thomson Reuters, a large global law firm
So why is this encouraging?
Lawyers, too, can divide complex legal tasks into small steps and skills that AI can handle. However, as we will conclude, certain skills should be left to legal experts.
The risk of AI becoming a crutch
Many lawyers are quickly adopting AI for legal work. But the profession and the industry are asking: how much use, and what kind of use, is too much?
Cue the second group. Unfortunately, a growing number of lawyers have misused AI and presented false, hallucinated information in court. What can we learn from their mistakes?
Some lawyers did not fully understand the limitations and proper use of consumer-grade AI tools. Even after reading the manual, there is always an inherent risk in using a tool. Others may have assumed that the citations they filed with the court were real, or simply never verified them.
“There is a lot of value in letting AI brute-force its way through a problem as a starting point, then being truly thoughtful about where and how human judgment and governance are applied.”
— Chief Technology Officer, Thomson Reuters
Microsoft explored the risk through a different lens. Its researchers examined survey responses from 319 knowledge workers who used GenAI tools such as ChatGPT and Copilot at least once a week, along with 936 real-world tasks those workers completed with GenAI tools. The report relied on a specific English-speaking demographic and self-reported data, but it still raises many questions.
These were some of the key findings:
- Cognitive laziness – AI can cause long-term overdependence and reduced critical thinking.
- Deteriorating skills – just as learning mathematics or a new language requires practice and repetition, mastering a task means building essential skills through the repetitive, mundane work that keeps them sharp.
The study found that “decreased confidence can lead to users relying on AI, potentially degrading engagement and independent problem-solving skills. This reliance on AI can be seen as a form of cognitive offloading, in which users depend on AI to perform tasks they do not feel confident handling themselves.”
The lesson is clear – relying too much on a crutch will gradually weaken your critical thinking muscles.
Efficiency combined with critical thinking
AI certainly improves efficiency, but it can also hinder critical thinking and independent judgment. So how can efficiency and critical thinking converge so that lawyers can become efficient critical thinkers?
The first step is honest self-awareness. Lawyers need to identify which tasks they trust AI to handle and which tasks they believe they can perform more effectively themselves.
Microsoft researchers emphasized the importance of this identification.
“GenAI tools can reduce the cognitive load of knowledge workers by automating a significant portion of their tasks; however, when knowledge workers are confident in performing a task themselves, they apply more effortful practices when steering AI responses, particularly when evaluating and applying those responses.”
Lawyers and their teams need to identify when and how critical thinking fits into the workflow, and when it is appropriate to invite AI into the process.
Generally, validating AI-generated content, such as case law citations, is an obvious starting point.
At this point, the question becomes: “In what other ways can AI help with both efficiency and critical thinking?”
Examples of a healthy AI work-life balance
A healthy AI work-life balance is not about time management. As we saw, AI can become a crutch that erodes lawyers' cognitive faculties.
But it can also act like a springboard, propelling them from one task to the next. It saves time, yes, but even more importantly, it can make their work experience more enjoyable.
Healthy integration within legal workflows means AI can:
- Facilitate validation – emphasize the need to use the most trusted and authoritative resources to validate AI-generated content.
- Raise awareness – highlight potential risks and downstream harms, ask clarifying questions, and encourage lawyers to critically evaluate AI output.
- Boost motivation – position critical thinking as a means of long-term skill development and professional growth within the interface.
- Offer customized levels of support – let lawyers adjust the scope of autonomous AI assistance based on their trust level and the complexity of the task.
That last bullet is where agentic AI turns the springboard into a self-driving car, or more precisely, an autonomous orchestra conductor.
Attorneys can use AI assistants to take care of common, everyday tasks: first-pass case law research, document summarization, aspects of contract review, and more.
The conductor, however, develops the vision, fixes problems, and ensures the accuracy and quality of the performance.
“This combination of orchestration and evaluation is how you get an agentic system that takes on increasingly complex tasks and can run for longer without human intervention.”
— Senior Director of AI Partnerships and Strategy, Thomson Reuters
Agentic AI goes beyond GenAI assistants, which require step-by-step guidance and prompting. Agents use a large language model (LLM) that calls tools in a loop. They develop their own strategies, reason logically, and carry out complex tasks toward predefined goals under human monitoring and management. They are dynamic, learn from mistakes, and become maestros, so to speak.
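For readers curious what "an LLM calling tools in a loop" actually means, here is a minimal sketch in Python. It is not any vendor's implementation: the `plan` function is a hard-coded stand-in for the model's reasoning step, and the two tools are hypothetical placeholders; a real agent would replace both with LLM and API calls, and a human would review the audit log.

```python
# Hypothetical tools the agent can call. In a real system these would
# hit research databases or document stores.
def search_case_law(query):
    # Pretend lookup: return document titles matching the query.
    return [f"Case A on {query}", f"Case B on {query}"]

def summarize(documents):
    # Pretend summarization: condense each document to one line.
    return [doc.split(" on ")[0] + " (summary)" for doc in documents]

TOOLS = {"search_case_law": search_case_law, "summarize": summarize}

def plan(state):
    # Stand-in for the LLM reasoning step: pick the next tool based on
    # the current state, and finish once summaries exist.
    if "documents" not in state:
        return ("search_case_law", state["goal"])
    if "summaries" not in state:
        return ("summarize", state["documents"])
    return ("finish", None)

def run_agent(goal, max_steps=5):
    state = {"goal": goal, "log": []}
    for _ in range(max_steps):       # bounded loop = human-set guardrail
        action, arg = plan(state)
        if action == "finish":
            break
        result = TOOLS[action](arg)  # call the chosen tool
        key = "documents" if action == "search_case_law" else "summaries"
        state[key] = result
        state["log"].append(action)  # audit trail for human review
    return state
```

The loop, the bounded step count, and the audit log are the essential pattern: the model decides, a tool acts, the result feeds the next decision, and a human can inspect every step afterward.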
In my Russian classes, I could have asked my questions in English early on and let the teacher do the critical thinking for me. Instead, I did my best to ask questions in Russian and let her guide my thinking, so that I reached my own conclusions.
Similarly, it is unwise for lawyers, especially young ones, to rely on AI for every task.
After all, it is through repetitive, mundane tasks that a lawyer first develops and masters the most basic skills. But as they acquire more advanced critical thinking skills, lawyers can do more with AI than they could without it.
“What really makes the current generation of LLMs extraordinary is not what they can do, but what they enable.”
— Product Manager, CoCounsel, Thomson Reuters
Human-like conversation with an LLM is now relatively easy. Think of an AI agent as an autonomous executive assistant who can work as part of a team. Imagine how this will enable new relationship dynamics and change the way businesses and legal departments are managed.
Ultimately, using AI to achieve a healthier work-life balance is both an art and a science. Moreover, the distinction between motivated, AI-supported lawyers and complacent, AI-dependent lawyers will be an important one for leaders to draw.
The most valuable work is done by motivated lawyers with a healthy work-life mindset. This is where critical judgment, strategic thinking, and relationship building belong, and where they thrive under the purview and expertise of human lawyers.

Related blog
Agentic AI and the law: how it is redefining the profession
Read the blog
