SJC's generative AI rules for judges and court staff are considered a good first pass

Simply put

  • SJC issues interim generative AI guidelines for state courts
  • Judges and court staff prohibited from using AI for legal research and writing
  • Only public data may be entered into approved AI tools
  • Lawyers call for clearer governance and a careful, measured expansion of AI adoption

Lawyers with expertise in artificial intelligence and data security say the Supreme Judicial Court should be applauded for taking what Chief Justice Kimberly S. Budd acknowledged in her recent annual State of the Judiciary address was a “modest first step” by issuing interim guidelines for the use of generative AI by judges and other court personnel.

But they also hope that as the instrument evolves, courts will increasingly embrace the potential that GenAI has to fill in some of the gaps in the guidelines and help courts function more efficiently and effectively.

The initial guidelines, adopted on November 12, apply to the use of GenAI tools by Massachusetts court judges, clerks, registrars, recorders, employees, law clerks, interns, and contractors.

The guidelines instruct all personnel that, at least for now, GenAI tools cannot be used to assist with legal research, legal document preparation, or other legal work. Instead, the SJC only approved the use of GenAI for “administrative tasks.”

Referring to a separate “Information Classification Policy,” the SJC told court staff that only “Level 1,” or public, information may be entered into one of three approved GenAI tools: OpenAI's ChatGPT, Google Gemini, and Microsoft Copilot.

The court's IT department also provided court staff with instructions on how to adjust the settings of these tools so that they do not retain input or use it to train their models.

The guidelines also hold users accountable for the accuracy of the information that GenAI generates in response to their requests.

“GenAI should not be used as a sole reference source or relied upon to make final decisions, and information generated by GenAI should always be independently verified,” the policy directs.

The document states that this policy “will be refined and revised as we learn more about generative AI.”

Room for growth

The most important word in the first edition of the SJC's GenAI guidelines is “interim,” said Boston attorney Colin J. Zick, co-chair of Foley Hoag's privacy and data security practice.

“There is a wealth of expertise available to courts.”

— Colin J. Zick, Boston

“They are erring on the side of caution in areas where the stakes are particularly high, such as confidentiality, fairness, and public trust,” he said. “Hopefully these are temporary pauses on the way to stronger and more permissive guidance.”

John F. Weaver, head of McLane Middleton's artificial intelligence practice in Woburn, said the guidelines are “a little conservative, but not far-fetched.”

He noted that the guidelines appear to incorporate many of the key terms that lawyers tend to recommend as part of an organization's AI use policy in terms of creating “guardrails.”

That said, the guidelines, at least in their initial form, are not as thorough as Weaver had hoped.

“Generally speaking, the larger and more sophisticated an organization is, the more detailed and thoughtful its AI usage policy and internal guidelines documentation can be expected to be,” he said. “The state court system is quite large and sophisticated.”

“I don't know which side of the fence they want to be on. But if that's the policy, there needs to be some remedial action for violations, and that's not here at all.”

— John F. Weaver, Woburn

But Weaver was quick to add that the state court system is also notoriously underfunded and under-resourced.

“So we want to grade on a curve here,” he said.

One thing Weaver had expected to see in the document was the names of the members of the “governing group” that created it.

“This is important because court officials don't know exactly who to turn to if they have questions, and they don't know who is overseeing these guidelines,” he said.

An explanation of how the court settled on the three approved applications might also have been helpful, Weaver added.

“Maybe it's because the court system is looking at opt-out features and data sharing options and they like them, but there's no explanation. I think this could be helpful to court staff,” he said.

There is also no discussion of how the list of approved generative AI applications will grow beyond the first three.

“Including language that addresses how to add generative AI applications to the list of approved applications will encourage experimentation,” Weaver said. “That's what we want to see our clients do, because if they can take responsibility and have their staff play within these guardrails, that could be helpful.”

Guidelines or policies?

Although the document is labeled “guidelines,” it is more like a policy for court officials to follow, Weaver said. Some clients prefer documents that are more advisory in nature, he said, but the court system seems to be the type of organization that wants something more restrictive.

“I don’t know which side of the fence they want to be on,” Weaver said. “But if that's the policy, there needs to be some remedial action for violations, and there just isn't that.”

To the extent that courts want to promote a conservative approach to the use of GenAI, he suggested that the consequences for violating guidelines and policies could help reinforce that idea.

Anthony T. Panebianco, a Boston lawyer who serves on Davis Malm's AI policy committee, said it is “probably wise” for courts to start with guidelines rather than a policy.

He saw in the document that the SJC is addressing some of the same issues his firm has been considering internally.

“Whether it's the Massachusetts courts, our own firm, or whatever the organization is, we're thinking about where we can implement [GenAI] with the most reliability and the least liability, and where we can use it to improve efficiency,” he said. “Perhaps the best version of that mixture exists in the administrative realm.”

Next steps

As for how he would like the document to evolve, Panebianco said he would like to see language added that reflects an interest in proactively understanding how technology is evolving and how tools that have yet to be developed can help the court's work.

A second proposed addition would be to formalize a process for reviewing how court officials use GenAI, he said, noting that his own office uses surveys for that purpose.

Weaver agreed that it would be helpful for the guidelines to include a reference to a mechanism or rubric for identifying whether the tools court personnel are using have proven effective. Such mechanisms could include requiring employees to record when they use generative AI and to track their use.

In Zick's view, courts have room to make greater use of AI “without compromising core values.” Not all GenAI tools are created equal, he noted.

“There are meaningful distinctions between public AI and private, contractually protected instances, between sealed case data and anonymized material, and between substantive legal analysis and low-risk administrative assistance,” he said. “Treating each category as if it carries the same risk can over-protect information that is already publicly available and underutilize secure, well-maintained tools that have the potential to increase efficiency and improve the user experience for court personnel and litigants.”

Zick suggested the SJC could create a “sandbox” for pilots at the trial and appellate levels as it plots its path forward. He also believes the SJC could set vendor requirements for approved tools, with features such as non-retention of data, no use of input for training, encryption, access controls, and audit logging. Zick added that the court could clarify “safe lanes” for public and anonymized uses, which would help turn vigilance into learning.

“These are governance choices rather than technology endorsements, and they can be designed to reflect the judiciary's unique obligations,” he said, adding that members of the Bar can and should be involved in developing the final guidelines.

“There's a lot of expertise that courts can tap into,” he said.

He stressed that he was in no way suggesting that GenAI should replace judicial decision-making or that sensitive case information should be uploaded to public AI services.

But given that lawyers are already using AI in court filings, and in some high-profile cases, abusing it, Zick believes it's time to let AI review the accuracy of filings, “with humans involved, of course.”

Zick said he had used AI to create an initial draft of proposed findings of fact after a bench trial.

“What if, along the way to making findings of fact, the judge could consider the complaint, the answer, dispositive motions, the trial record, exhibits, and the parties' own proposed findings?” he said. “The speed and accuracy of judicial decisions in this and many other areas could see revolutionary improvements.”

SJC's first steps may be modest, Zick suggested, but they could be the start of something big.

“The opportunity now is to move from ‘not now’ to ‘only under these circumstances,’ and create a managed, transparent, and auditable pathway that aligns innovation with the court's constitutional commitments,” he said. “That evolution will serve the bench, the bar, and, most importantly, the public we all collectively serve.”

A court spokeswoman was not available to respond to questions incorporating some of the lawyers' criticisms before Lawyers Weekly's post-holiday deadline.
