Updated guidance published on the use of AI by judicial office holders

HM Courts and Tribunals Service has published updated guidance for judicial office holders on the use of tools that rely on artificial intelligence. The guidance replaces a document previously published in April 2025 and reminds office holders of the principle of personal responsibility for any material produced in their name, said Lord Justice Barth, Chief Justice for Artificial Intelligence.

In recent months, courts have had to admonish individuals for misusing artificial intelligence tools such as ChatGPT, Google’s Gemini, and Apple’s Siri. High-profile incidents involving citations of precedents that simply do not exist have been reported, and the guidance also warns:

“AI chatbots are currently being used by unrepresented litigants. AI chatbots may be the only source of advice and assistance that some litigants receive. Litigants are unlikely to have the skills to independently verify the legal information provided by AI chatbots and may not realise that they are prone to making mistakes.”

Judicial office holders are encouraged to investigate suspected use of AI, find out what accuracy checks (if any) have been carried out, and “inform litigants that they are responsible for what they submit to the court/tribunal.”

Those using AI are advised to ensure they have a “basic understanding” of its capabilities and potential limitations, noting that public AI chatbots do not provide answers drawn from a trusted database.

“As with other information available on the internet generally, AI tools can be useful for finding material that you would recognise as correct but do not have to hand, but they are a poor way of conducting research to find new information that you cannot verify. They may be best seen as a way of obtaining non-definitive confirmation of something, rather than a source of immediately correct facts. The quality of the answers you receive depends on how you engage with the relevant AI tool, including the nature of the prompts you enter, and on the quality of the underlying dataset, which may include false information (deliberate or otherwise), selective data, or data that is out of date. You should be aware that even with the best prompts, the information provided may be inaccurate, incomplete, misleading, or biased.”

The guidance continues that much of the output is derived from publicly available information on the internet. “Their ‘view’ of the law is often based heavily on US law and historical law, although some claim to be able to distinguish it from the law of England and Wales.” It also warns of instances where AI “hallucinates”, simply fabricating cases and precedents that do not exist.

Judicial office holders are also cautioned not to enter sensitive information into AI tools, as the AI may draw on that information when responding to similar questions from others, potentially compromising confidentiality. Anyone who unintentionally discloses confidential or personal information should contact their supervising judge and the Judiciary. If the disclosure involves personal information, it must be reported as a data incident.

Commenting on the updated guidance, Lord Justice Barth, Chief Justice for Artificial Intelligence, said:

“A judicial body’s use of AI must be consistent with its overriding duty to protect the integrity of the administration of justice and uphold the rule of law. I welcome the publication of updated AI guidance that reinforces this principle and the personal responsibility that judicial office holders have for all material produced in their name. I encourage all judicial office holders to read and carefully apply the guidance.”

The latest guidance applies to all judicial office holders for whom the Chief Justice and Senior President of the Court are responsible, together with their clerks, judicial assistants, legal advisers and other support staff, and is available on the HMCTS website.


