Invisible Voices and a New Evidentiary Minefield: AI, Deepfakes, and the Right to be Heard in International Arbitration
Sanjida Sohana is an Assistant Counsel at the Bangladesh International Arbitration Centre (BIAC) and is also engaged in legal practice before the District and Sessions Judge Court, Dhaka. Her LinkedIn is https://www.linkedin.com/in/sanjida-s-06b34a1ab
Yash Jain is an Associate at Dr. Vimal Verma & Associates in New Delhi, specializing in Corporate Law and M&A transactions. He also maintains an active litigation practice before the High Court and District Courts of Delhi. His LinkedIn is https://www.linkedin.com/in/iyashjain
Introduction
In the world of international arbitration, we are facing a new kind of threat from AI and technology, one that strikes at the very heart of the arbitral process by challenging both the authenticity of evidence and the right to be heard. AI tools such as deepfakes and large language models introduce strategic and procedural risks into arbitration. Deepfakes cast doubt on otherwise reliable evidence, while automated summarization and filtering place the right to be heard in jeopardy, undermining the integrity of due process and the pursuit of truth.
Although AI is adopted to reduce costs and save time, the necessary safeguards, such as expert verification and separate evidentiary hearings on authenticity, often increase both. Generative AI can provoke costly disputes and may distort or overlook subtle arguments during automated summarization. This article therefore advocates compulsory disclosure and procedural protections to ensure fairness and transparency and to uphold the right to be heard.
The AI Doubt Effect on the Right to be Heard and Evidence Integrity
The growing use of large language models such as GPT or BERT in arbitration raises two distinct but related concerns: procedural fairness and the authenticity of evidence. These technologies have the potential to streamline research, drafting, and evidence review, but they also introduce risks that can affect both the substance and the process of arbitral decision-making. While efficiency and innovation are welcome, these developments must not indirectly undermine procedural guarantees such as the right to be heard, reflected in Article 18 of the UNCITRAL Model Law and comparable instruments.
From a procedural standpoint, AI-assisted advocacy could disrupt the equilibrium between parties. If one party has greater access to advanced tools or deploys them more effectively, this may affect the equality of arms and thereby the fairness of the proceedings. Even where both sides use such tools, unverified or fabricated AI outputs can distort arguments or introduce inaccuracies[i] that impair the tribunal’s understanding of the issues, effectively diminishing a party’s opportunity to make itself genuinely heard. Conversely, if tribunals develop an inherent distrust of parties’ AI-assisted submissions, they may discount or undervalue arguments, again impairing procedural fairness.
These tensions are already emerging in practice. A 2021 International Arbitration Survey shows that users are motivated to use AI for time savings (54%) and cost reduction (44%), yet remain concerned about unreported errors and bias (51%) and confidentiality risks (47%). Courts and tribunals have already begun to address the fallout from unregulated use: in the United States, two attorneys were sanctioned for submitting pleadings containing fabricated case citations;[ii] in Ayinde v Haringey and Al-Haroun v Qatar,[iii] unregulated AI use was condemned as “improper, unreasonable, and negligent” professional activity. In arbitration, similar anxieties surfaced in LaPaglia v. Valve Corp,[iv] where a party sought to annul an arbitral award, alleging that the arbitrator had used ChatGPT to draft the decision.
The absence of a clear framework compounds these problems. Without established rules governing the disclosure, admissibility, and verification of AI-assisted legal work, tribunals must make ad hoc determinations. In practice, similarly situated parties may therefore face divergent and unpredictable disclosure and admissibility expectations, raising legitimate concerns about the fairness and transparency of proceedings.
Closely linked is the question of evidentiary authenticity and integrity. AI systems can be used not only to assist in argumentation but also to generate or alter evidentiary materials. This concern about the possibility of using AI to fabricate or modify evidence has given rise to what has become known as the “AI doubt effect”: a pervasive uncertainty about whether what appears genuine truly is. Current international instruments provide little guidance here. The IBA Rules on the Taking of Evidence in International Arbitration[v] presume the authenticity of e‑documents,[vi] even though that presumption can easily be challenged by a plausible deepfake claim.[vii] The UNCITRAL Rules and the New York Convention are likewise silent on evidence authenticity, especially in the context of AI.
Judicial developments illustrate how unsettled this terrain remains. In State of Washington v. Puloka, a Washington court excluded AI-enhanced video evidence for lack of reliability, whereas in Huang v. Tesla a California court rejected a challenge grounded in the vague possibility that the video could have been a deepfake.
Ultimately, both procedural and evidentiary risks converge on the same concern: the preservation of fairness and trust in arbitral justice. Whether through unequal access to AI tools, distorted argumentation, or doubts about the authenticity of evidence, the unchecked use of AI can undermine the integrity of the arbitral process. Addressing these issues requires not only technical solutions but also a principled framework that aligns technological innovation with the procedural guarantees at the core of international arbitration.
Evolution of Soft Law in Governing AI Use in Arbitration
Despite the concerns surrounding the use of AI in the legal profession, it is becoming clear that its use in arbitration is inevitable. The challenge, therefore, is not whether to allow AI, but how to regulate and integrate it in ways that preserve the integrity of proceedings. Arbitral institutions are beginning to assume a leadership role in this area by developing guidance to manage AI’s impact on arbitration.
Some initiatives focus on disputes directly involving AI systems. For instance, the JAMS Rules are specifically tailored to disputes in which AI technology is itself the subject matter.[viii] Other institutions have published general soft-law guidance on the responsible use of AI in arbitration. The Vietnam International Arbitration Centre, for example, has issued a non-binding AI Note advocating compliance with soft-law instruments and institutional regulations.[ix] Similarly, Singapore’s Ministry of Law has advocated responsible AI governance, examining the incorporation of AI into legal case management systems.[x]
Across these initiatives, there has been a general emphasis on procedural safeguards and transparency, including the need for adequate disclosure. The Silicon Valley Arbitration & Mediation Center has issued guidelines encouraging all participants in an arbitration to disclose their use of AI tools. Similarly, the Stockholm Chamber of Commerce requires that the arbitral tribunal be made aware of AI usage. The CIArb guidelines on the subject foresee that “arbitrators may require disclosure of the use of an AI Tool”.[xi] The Association of Arbitrators (Southern Africa) has likewise developed a regional soft-law instrument providing that the tribunal should ensure that AI tools do not compromise the integrity of the proceedings.[xii]
Taken together, these developments show a progressive shift from resistance to managed acceptance of AI in arbitration. The emerging consensus is that, rather than excluding AI, institutions must focus on creating procedural mechanisms to ensure that technological efficiency does not come at the expense of fairness and due process.
Conclusion: Strengthening Arbitral Practice in the Age of AI
Technological developments are more than mere advancements; in arbitration, they pose dual risks. Summarization tools may oversimplify nuanced arguments, while deepfakes undermine evidentiary authenticity, threatening the right to be heard and challenging the fairness, reliability, and participatory character of arbitral proceedings. For international arbitration to remain a trusted forum for dispute resolution, compulsory disclosure and procedural safeguards are not merely best practices; they are foundational requirements that must be kept at the centre of any discussion around the use of AI.
The way forward is the development of consensus among institutions on the rules governing the use of AI in arbitration, just as consensus has formed, with minor variations, around other contemporary issues such as the permissibility and disclosure requirements of third-party funding. For emerging practitioners, this is both a challenge and a calling: to develop the skills and ethical clarity needed to ensure that AI enhances justice rather than compromises it.
[i] Katrina Limond and Alexander Calthrop, ‘Artificial intelligence in arbitration: evidentiary issues and prospects’ (Global Arbitration Review, 9 September 2025) <https://globalarbitrationreview.com/guide/the-guide-evidence-in-international-arbitration/3rd-edition/article/artificial-intelligence-in-arbitration-evidentiary-issues-and-prospects> accessed 21 September 2025.
[ii] Coomer v. Lindell et al. [2025] D. Colo., No. 1:22-cv-01129.
[iii] Ayinde v Haringey [2025] EWHC 1383, No. AC-2024-LON-003062 and Al-Haroun v Qatar [2025] EWHC 1383, No. CL-2024-000435.
[iv] LaPaglia v. Valve Corp [2025] S.D. Cal., 3:25-cv-00833.
[v] International Bar Association, IBA Rules on the Taking of Evidence in International Arbitration (adopted 17 December 2020).
[vi] International Bar Association, IBA Rules on the Taking of Evidence in International Arbitration (adopted 17 December 2020) Art 3(12).
[vii] International Bar Association, IBA Rules on the Taking of Evidence in International Arbitration (adopted 17 December 2020).
[viii] JAMS, Rules Governing Disputes Involving Artificial Intelligence Systems 2024.
[ix] VIAC, Note on the Use of Artificial Intelligence in Arbitration Proceedings 2025.
[x] ‘Welcome Address at the 25th Annual IBA Arbitration Day’ (Ministry of Law, Singapore, 23 February 2024) <https://www.mlaw.gov.sg/news/speeches/2m-welcome-address-25th-annual-iba-arbitration-day/> accessed 19 July 2025.
[xi] CIArb, Guideline on the Use of AI in Arbitration 2025, Part III, 4.4.
[xii] Association of Arbitrators (Southern Africa), Guidelines on the Use of AI in Arbitrations and Adjudications 2025, Rule 13.