Young ICCA – Cleary Gottlieb Debate at Paris Arbitration Week 2023: This house believes the impact of AI on arbitration is a distant pipedream, not an imminent reality

Date:
30 March 2023, 13:00 - 15:00 (CEST)
City:
Paris
Venue:
Cleary Gottlieb Steen & Hamilton
Venue address:
12 Rue de Tilsitt, 75008 Paris, France

Post Event Report

Written by Nusaybah Muti

 

On 30 March 2023, Young ICCA and Cleary Gottlieb hosted a debate as part of the Paris Arbitration Week 2023.  The debate was moderated by Maanas Jain (Three Crowns) and featured prominent practitioners discussing the following motion: “This house believes the impact of AI on arbitration is a distant pipedream, not an imminent reality”.  Sophie Nappert (3 Verulam Buildings) and Pratyush Panjwani (Hanotiau & van den Berg) argued in favour of the motion, while Jacob Turner (Fountain Court Chambers) and Claire Morel de Westgaver (Bryan Cave Leighton Paisner) argued against it.

 

Note:  The positions adopted by the speakers during the debate, and summarised below, do not necessarily reflect their personal views and should not be construed as such.

 

The Debate

 

Ms Nappert kicked off the debate by referring to excerpts from a text that she later revealed had been entirely fabricated by ChatGPT, an artificial intelligence (“AI”) chatbot coded to predict word associations.  Against this backdrop, she highlighted how the human mind tends not to second-guess an algorithm, accepting its output because it is generated by code and hardware.  According to her, this fact alone explains why AI in international arbitration is (and should be) a distant pipedream.

 

Ms Nappert opined that there is little doubt that a proper ethical partnership between AI and humans has the potential to optimise the arbitral process.  AI can automate repetitive, time-consuming work, and thus increase efficiency and reduce costs.  However, it also raises critical issues that challenge historical notions of due process and fairness.  Concern has been raised about AI’s propensity to increase bias: it may replicate human error or bias, or even introduce new ones.  For example, an AI program trained on data that reflects biases that affected past decisions could incorporate those biases into future decision-making.  The danger lies in how AI can give decisions the appearance of objectivity because they are generated by technology.

 

She then referred to an entire branch of AI dedicated to developing so-called “explainable AI”, whose outputs can be understood by humans.  This is proving difficult, however, not least because of the cost of developing such a program.  Ms Nappert concluded her submissions by referring to the 2021 QMUL survey.  Whereas 5% and 14% of respondents in the 2018 survey used AI frequently and sometimes, respectively, these figures rose to 13% and 26% in 2021.  Although this is a rise, according to Ms Nappert, it is far from an overall endorsement of AI.

 

In opposing the motion, Mr Turner began by defining AI as technology with elements of autonomy, meaning it can learn, improve, and take decisions independently of its programmers.  He gave examples of how AI is already being applied to multiple aspects of the arbitral process: (1) AI analyses large volumes of documents and identifies key information; (2) AI uses past court and arbitral cases to predict the likelihood of a certain outcome based on the facts; (3) in arbitral panel selection, AI can assess arbitrators’ previous decisions and track records, and can be used to avoid unconscious bias and promote diversity by focusing on key metrics rather than personal connections; (4) AI assists lawyers in managing scheduling, filing documents, and communicating with each other; (5) voice-to-text technology provides live transcripts and even real-time translation; and (6) in award-writing, AI programs like ChatGPT are capable of summarising and distilling large amounts of information into a more digestible format.

 

He opined that the last example is more imminent than current, but that there is no reason why arbitrators could not use an AI-generated summary of facts and submissions in the same way they might use a version prepared by a tribunal secretary or junior colleague, at a fraction of the cost.  Although there are still technological, legal, and ethical barriers to this, Mr Turner noted that the motion does not go that far.  The motion refers to the impact on arbitration generally, which covers even low-value arbitrations.  He quoted Prof. Richard Susskind in arguing that access to justice will be greatly improved by using AI tools in repetitive, low-value disputes rather than making individuals wait longer to have their cases resolved by human case handlers.

 

Mr Panjwani followed Mr Turner by setting out his perspective on the historical context and on AI’s definition.  He described how humanity has historically been sceptical when faced with technological innovation.  As an example, he referred to the industrial revolution and how it took almost 200 years after the discovery of the steam engine before humanity managed to understand it and commercialise its use.  He argued that, likewise, we need to remain sceptical and keep a steady pace before jumping on the AI bandwagon.

 

As to AI’s definition, Mr Panjwani distinguished between statistical modelling and generative AI.  He referred to the definition of AI given by the Working Group on Data Protection in Telecommunications: “it employs tools that would generate results comparable to human intelligence”.  In contrast, the purpose of statistical modelling is to infer something about the relationships between the various data points fed into it.  Statistical modelling has existed for decades and does not qualify as generative AI.  He argued that of the six examples provided by Mr Turner, only the last qualifies as generative AI, as the rest simply connect one data point to another to make inferences.

 

Mr Panjwani turned to the threats of AI which warrant scepticism.  First, it could potentially be deemed illegal on grounds of copyright infringement and data privacy.  In coming up with something that appears to be novel, AI can draw on copyrighted techniques or styles.  Second, live transcription allows programs to collect relevant data and information, which poses privacy concerns.  He also highlighted how AI could generate inaccurate and biased outputs if it is trained on incomplete and biased data.  In concluding, he underlined the fundamental responsibility of lawyers to look for the truth and protect it.  It is therefore incumbent upon the legal community to look at AI from two steps behind.  Until the use of AI can guarantee nothing but the truth, we should remain sceptical.

 

Arguing against the motion, Ms Morel de Westgaver also referred to the statistics of the 2018 and 2021 QMUL Surveys.  She added that the percentage of survey respondents who use AI either rarely or sometimes increased from 20% to 50% in three years.  She therefore stressed that AI was undoubtedly being used, and was already a reality in international arbitration, as early as 2018.

 

Ms Morel de Westgaver took the opportunity to shed light on the lack of regulation of the use of AI.  She referred to the use of predictive coding, a document review tool by which relevant or responsive documents are identified by an algorithm rather than by a human.  She explained that it is already being used today due to its ability to reduce costs, increase accuracy, and quantify risks.  When parties that use predictive coding disclose it, all participants in the arbitration can find out about its accuracy and risk of error, information that is not available with a more traditional document review.

 

Ms Morel de Westgaver argued that the lack of regulation of the use of AI gives rise to concerns related to fairness, tampering with evidence, and an overall lack of transparency.  She opined that the broad powers of a tribunal to issue orders with respect to AI would not suffice if the use of AI itself is not disclosed in the first place.  With respect to predictive coding, it is important for all participants to know that an algorithm is being used to identify responsive documents, as cases will ultimately be decided on the basis of these documents.  In concluding, she submitted an additional motion: “This House believes that the arbitration community should regulate the use of AI in arbitration as a matter of urgency.”

 

During a round of rebuttal, Mr Panjwani reiterated his position on the difference between statistical modelling and generative AI.  He agreed that predictive coding is already being used, but argued that it is merely a form of statistical modelling, much like Microsoft Excel.  For his part, Mr Turner clarified that there is a distinction between: (1) traditional computer programs, which are logic-based; and (2) machine learning programs, which are autonomous and can do things that their programmers did not anticipate.  Such autonomy is also displayed in analytic AI, which learns its own rules rather than being pre-programmed, and this includes predictive coding.

 

Conclusion

 

The debate was an interesting and fruitful exchange of insights among international arbitration practitioners on this very timely topic.  Those in favour of the motion emphasised caution given the risks posed by AI, while those against it focussed on the benefits and actual use of AI in arbitration.  Although the discussion deviated somewhat from the main motion, turning largely on the definition of AI, the participants found common ground on the need to regulate the use of AI in international arbitration.  As reflected by the poll results, the opposition swayed the audience, with 62% voting against the motion and 38% in favour.

 

 

Meet the Speakers

Claire Morel de Westgaver

 

Claire Morel de Westgaver is a partner in BCLP’s International Arbitration group. She is qualified in England & Wales and in New York, with a mixed common / civil law background, having completed her legal education in Belgium and in the USA. She practises international arbitration as counsel, as advocate and as arbitrator, and conducts proceedings in both English and French. She has particular experience of disputes relating to technology, corporate transactions, licences, cross-border sale or service agreements, as well as disputes involving secrecy, intellectual property and cybersecurity issues. She is a co-founder of the award-winning initiative Mute Off Thursdays and a Board member of the Silicon Valley Arbitration and Mediation Center (SVAMC). She sits on the ICC Taskforce on the use of Information Technology in arbitral proceedings, on the IBA Arbitration Committee Taskforce on Privilege in International Arbitration and on the Advisory Board of CyberArb.

 

Sophie Nappert

 

Sophie Nappert is an arbitrator in independent practice, based in London. She is dual-qualified as an Avocat of the Bar of Quebec, Canada and as a Solicitor of the Supreme Court of England and Wales. Before becoming a full-time arbitrator, she pursued a career as an advocate and was Head of International Arbitration at a global law firm. She is commended as "most highly regarded" and a “leading light” in her field by Who’s Who Legal. Sophie is highly sought-after in complex energy, investment and natural resources disputes. She is a pioneering practitioner at the intersection of arbitration and Legal Tech. In 2019, she completed the University of Oxford’s Saïd Business School Programme on Blockchain Strategy. In 2021, she co-founded ArbTech, a worldwide, online community forum fostering cross-disciplinary dialogue on technology, dispute resolution and the future of justice. In its first year of existence, ArbTech was shortlisted at the 2022 GAR Awards.

 

Jacob Turner

 

Jacob Turner is a barrister at Fountain Court Chambers. He is the author of Robot Rules: Regulating Artificial Intelligence (2018) and a contributing author to The Law of Artificial Intelligence (2020). He advises governments, regulators and private organisations on legal issues relating to AI. His recent cases have included defending nine banks in a multi-billion dollar dispute where the claimants attempted to use an AI program to prove their case, acting for former Uber drivers sacked by algorithms, and in proceedings in the UK Supreme Court concerning whether an AI program can be named as the inventor in a patent application. Jacob previously worked as an associate at Cleary Gottlieb Steen and Hamilton and before that in the legal department of a country’s Permanent Mission to the UN in New York. Jacob is also a former law clerk to Lord Mance in the UK Supreme Court. He holds degrees from Oxford and Harvard.

 

Pratyush Panjwani

 

Pratyush Panjwani is a lawyer qualified to practise in India and registered as a foreign lawyer in Belgium. Pratyush is currently a Senior Associate at Hanotiau & van den Berg, Brussels, where he works primarily as counsel in annulment and enforcement proceedings arising out of investor-State arbitrations. He also works as tribunal secretary in (ad hoc and institutional) arbitrations spanning various sectors, including corporate, construction, hospitality, renewable energy, and licensing disputes. Pratyush has published frequently in journals of repute on issues relating to commercial arbitration (such as the arbitrability of disputes) as well as investor-State arbitration (such as the interpretation of bilateral investment treaties under the Vienna Convention on the Law of Treaties). Pratyush obtained an LL.M. from the MIDS, Geneva in September 2016. Prior to that, he worked in a boutique dispute resolution law firm in New Delhi, after graduating from the National Law University, Delhi in 2014.

 

Maanas Jain

 

Maanas Jain is an English-qualified barrister and senior associate in the London office of Three Crowns. He has advised, represented, and conducted advocacy for corporations and States in complex, high-value commercial and investment treaty arbitrations in a broad range of sectors (including energy, finance, technology, and infrastructure) under all major arbitration rules. He has extensive experience handling disputes involving States or State entities, as well as cases with an Indian connection. Maanas is a current co-chair of Young ICCA, and is ranked as a “Rising Star” in The Legal 500 UK’s 2023 guide for International Arbitration. He was also recently recognised as one of London’s brightest arbitration stars in Legal Business’ 2022 Disputes Yearbook.

 

 

Events Team:

 

  • Munia El Harti Alonso (Young ICCA Regional Representative)
  • Nikita Kondrashov (Young ICCA Regional Representative)
  • Stefanie Efstathiou (Young ICCA Regional Representative)
  • Paul Kleist (Young ICCA Events Co-Director)
  • Saemee Kim (Young ICCA Events Co-Director)
  • Maanas Jain (Young ICCA Co-Chair)
  • Maria Athanasiou (Young ICCA Co-Chair)
  • Shirin Gurdova (Young ICCA Co-Chair)

Interested in receiving event updates?

Sign up for Young ICCA Membership to receive email updates on all Young ICCA events and workshops.