Article originally published by the Computer and Law Society on April 22, 2026.
The Lewis Silkin Dispute Resolution team compiles the latest guidance from arbitral institutions on the use of AI
Artificial intelligence is already transforming the way the legal profession works, including research, data analysis, and document creation through platforms such as Harvey, which is used by Lewis Silkin. However, as Dame Victoria Sharp observed in Ayinde v London Borough of Haringey [2025] EWHC 1383 (Admin), AI presents risks as well as opportunities, and its use must therefore take place under an appropriate degree of oversight. This article examines AI guidance issued by arbitral institutions and considers recent developments in AI governance in litigation.
AI in arbitration: uses and risks
The announcement by the American Arbitration Association®-International Center for Dispute Resolution® (AAA-ICDR) in September 2025 of the introduction of AI arbitrators to adjudicate documents-only construction cases marked a significant and bold step in the use of AI in arbitration, although human verification remains a core element of the AI-driven process.
Other arbitral institutions have been more cautious about implementing this technology.
Current, more conservative AI applications in arbitration include legal research, document review, drafting submissions, and outcome prediction. Arbitrators may also seek to deploy AI to process information more efficiently, for example by summarizing evidence or analyzing large volumes of documents. Importantly, however, all of these processes remain human-driven.
The use of AI is not without risks, the main one being the phenomenon of “hallucinations”, in which AI generates case citations or legal propositions that appear plausible but are in fact invented. Other concerns include confidentiality leaks (particularly when sensitive data is input into publicly available AI tools), potential algorithmic biases embedded in training data, and the “black box” problem: the difficulty of understanding how an AI system reaches a particular conclusion.
The key is to understand these risks so that they can be mitigated through human input and checks while maximizing the benefits of AI for all parties.
Guidance from arbitration institutions
A growing number of institutions are issuing guidance on the use of AI in arbitration. Although the specific provisions vary, a consistent message emerges: the use of AI is permitted, but human responsibility, oversight, and verification are essential, and decision-making must not be delegated. This closely mirrors the approach taken by national courts, such as those of England and Wales and of Singapore.
While other institutions such as the LCIA, SIAC, HKIAC, and ICC are understandably taking their time to assess the situation, the published approaches (summarized below) can inform practitioners and arbitrators alike on how to take a safe and prudent approach to deploying AI in real-world disputes under any set of rules.
Chartered Institute of Arbitrators (CIArb)
CIArb published its Guideline on the Use of AI in Arbitration in March 2025, with an update following in September 2025, addressing issues that participants in arbitration proceedings should keep in mind when considering the use of AI. The guidelines begin by highlighting the benefits and risks of the use of AI in arbitration and then provide general recommendations, including that parties and arbitrators make reasonable inquiries about AI tools before use, weigh the benefits and risks, consider applicable laws and regulations, and maintain accountability. They further note that arbitrators have the power to issue directions regarding the use of AI and may require disclosure where the use of AI could affect the proceedings. Discussion between the participants about the use of AI is encouraged.
The guidelines recognize that arbitrators have discretion over whether to use AI tools to enhance the arbitration process, including its efficiency and the quality of decision-making. Importantly, the guidelines caution against using AI in ways that could undermine the integrity of the proceedings or the enforceability of awards, noting that while AI can be used to support more accurate and efficient processing of submitted information, arbitrators should not relinquish their decision-making authority to AI and must exercise independent judgment at all times. Individual responsibility, including independent checks and verification, is also emphasized.
CIArb provides template agreements and procedural instructions on the use of AI in arbitration, which practitioners can incorporate into the framework of arbitration agreements or procedures.
Silicon Valley Arbitration and Mediation Center (SVAMC)
SVAMC issued the first AI-specific arbitration guidelines in April 2024, noting that the development of best practices for the use of AI in international arbitration is still in its infancy and that the guidelines aim to contribute to that effort. The Guidelines provide a principles-based framework for the use of AI tools in arbitration and are intended to assist arbitration participants in navigating the potential applications of AI. The Guidelines apply to the extent agreed by the parties, pursuant to an order of the arbitral tribunal, or where an arbitral institution decides to adopt them.
Guidelines 1, 2, 4, and 5 emphasize that users of AI tools should ensure confidentiality; understand the uses, limitations, and risks of the AI tools they use (and the techniques available to mitigate those limitations and risks); maintain responsibility for the use and output of AI tools; and respect the integrity of the proceedings and the evidence.
Guideline 3 makes clear that the guidelines impose no disclosure obligations regarding the use of AI. While disclosure may be appropriate in some circumstances, the guidelines recognize that the widespread use and evolving nature of the technology make it difficult to set standards for disclosure of AI use, and that doing so may create more problems than it solves.
Importantly, the guidelines make clear that while AI tools can be used to assist arbitrators, they should not delegate final decision-making functions (Guideline 6).
SCC Arbitration Institute (SCCAI)
SCCAI published a guide on the use of AI in October 2024. Its stated aim is to provide “flexible instruction…without imposing specific obligations”. The guidance is short and focuses on the importance of confidentiality and of effective human oversight to prevent any deterioration in the quality of arbitral awards. Arbitral tribunals are encouraged to disclose the use of AI in investigating and interpreting facts and law, or in applying the law to the facts, and should not delegate decision-making or reasoning.
Vienna International Arbitration Center (VIAC)
In April 2025, VIAC published a memorandum on the use of AI in arbitration proceedings with the aim of facilitating discussions between the parties. This relatively short guidance is set out under six headings and emphasizes the importance of compliance with ethical codes and professional standards when using AI tools, the non-delegation of decision-making authority by arbitrators, and confidentiality. The memorandum notes that arbitrators have discretion to manage and promote transparency regarding the use of AI in proceedings, including discussing AI at case management conferences, deciding whether to disclose their own use of AI, and reaching agreements with the parties regarding AI. Finally, arbitrators have discretion as to whether to require disclosure of AI-assisted evidence and how to evaluate its admissibility, relevance, and materiality.
As mentioned above, other arbitral institutions have not issued specific guidance on the use of AI in arbitration proceedings, but this does not mean that the topic is not high on their agenda. For example, the ICC has established a Task Force on AI in International Dispute Settlement to “provide guidance and thought leadership to balance the opportunities presented by AI with the need to protect the fundamental principles underlying international dispute resolution from the risks associated with its use”.
CJC consultation on AI in litigation
It is interesting to compare the approach taken in arbitration with the position developing in litigation. In England and Wales, the Civil Justice Council (CJC) is consulting on whether rules are needed to govern the use of AI in producing court documents. The CJC’s interim report, published in February 2026, takes the tentative view that (subject to some proposed limited exceptions) there is no need for formal rules governing AI-assisted statements of case as long as the court document names the legal representative with professional responsibility for it. The limited exceptions relate to (i) trial witness statements, where it is proposed to introduce rules requiring a declaration that AI has not been used to generate the content of such statements; and (ii) expert evidence, where it is proposed that experts explain how AI has been used substantively and which tools have been used.
The consultation will conclude in April 2026, after which the final report will be published. So far, the proposals represent a fairly “hands-off” approach, but go further than previously published arbitration guidelines by proposing specific rules for the use of AI in trial witness statements and expert evidence.
Comment
The emergence of AI-specific guidance from arbitral institutions in a relatively short period of time shows that the profession recognizes both the transformative potential of AI in arbitration and the risks associated with it. Despite differences in form and detail, a clear consensus has emerged: while AI can be deployed to improve efficiency and support the arbitration process, human responsibility, oversight, and verification remain paramount, and decision-making should not be delegated to AI systems.
Institutional guidance now provides practitioners and arbitrators with flexibility in how they approach the use of AI. In litigation, the CJC’s consultation on AI in court proceedings in England and Wales suggests that more prescriptive and specific rules may be considered, particularly in areas where the risks are most acute, such as the preparation of witness statements and expert evidence. Arbitration practitioners should pay attention to developments in national courts as these may influence the approach taken by arbitrators.
The consensual and flexible nature of arbitration may make it advantageous to continue to rely on guidance and agreement of the parties rather than binding rules. Nevertheless, as the capabilities and uses of AI evolve, arbitral institutions and arbitration practitioners must remain vigilant to ensure that the pursuit of efficiency does not compromise the integrity of proceedings and the quality of awards. A balance between adopting innovation and maintaining appropriate safeguards is key.
