A new Northwestern University study that surveyed federal judges across the United States about the use and prospects of artificial intelligence in and out of the courtroom found that more than 60% of responding judges reported using at least one AI tool in their judicial work. Despite this widespread adoption, only 22.4% of judges reported using AI tools weekly or daily.
A research team led by Daniel Linna, director of the Law and Technology Initiative and senior lecturer at Northwestern Pritzker Law, and V.S. Subrahmanian, Walter P. Murphy Professor of Computer Science in the McCormick School of Engineering and director of the Northwestern Security & AI Lab, conducted a stratified random sample survey of bankruptcy judges, magistrate judges, district court judges, and appellate court judges. The Qualtrics survey asked participants about their current use of AI tools, examples of AI use in the judiciary, and their outlook on the potential impact of AI on the judiciary.
“To our knowledge, this is the first study based on a random sample of federal judges’ use of AI,” Linna said. “The advantage of a random sample is that, although the study has limitations, it provides a good basis for extrapolating the findings to the entire population of federal judges.”
The study was published by the Sedona Conference, with the New York City Bar Association as a co-publisher.
“While some judges are cautious, there are many who believe that AI creates opportunities to improve access to the courts, access to justice, and the quality of judicial decisions, but it requires intentionality,” Linna said. “We need to think about how we bring these technologies into the courtroom, provide AI training to judges, and analyze the benefits and risks.”
The numbers
The study population consisted of active federal judges serving as of August 2025. The stratified random sample consisted of 92 bankruptcy judges, 177 magistrate judges, 182 district court judges, and 51 appellate court judges, for a total of 502 judges selected for the study. This list was compiled using Ballotpedia, the Federal Judicial Yearbook, and the Federal Judicial Center Directory. Researchers collected 112 responses from December 2 to 19, 2025.
The survey asked about the following large language models: ChatGPT (OpenAI), Claude (Anthropic), Copilot (Microsoft), Gemini (Google), Grok (xAI), and Perplexity. Also included were the following “AI for law” tools: CoCounsel (Thomson Reuters), Westlaw AI-Assisted or Deep Research (Thomson Reuters), Protégé or Lexis+ AI (LexisNexis), Vincent AI (vLex), Harvey, and Legora. Of the 112 judges who responded to the survey, more than 60% reported using at least one of these AI tools in their judicial work, while approximately 38% had never used any of the listed tools in their work. Nearly one in four judges (22.4%) reported using AI tools weekly or daily.
“AI has many potential applications for knowledge work,” Subrahmanian said. “Our research shows that a significant number of federal judges are already using AI tools.”
Judges are more likely to use “AI for law” tools than general-purpose AI platforms. Researchers found that judges themselves primarily use AI tools for legal research (30%) and document review (15.5%). Judges reported that others in their chambers likewise use AI tools primarily for legal research (39.8%) and document review (16.7%).
Linna said there is a correlation between personal and professional uses of AI.
“If judges are using AI in their personal lives, they are more likely to use AI in their professional lives,” he said. Overall, the study found that 38% of judges use AI daily or weekly outside of work, while a slight majority reported rarely (26.9%) or never (25.9%) using AI outside of work.
