
Why Machines Should Not Be Making Legal Decisions

Torkin Manes LegalWatch
 

Generative artificial intelligence (“AI”) and large language models (“LLMs”) have immense potential to help address the backlog of judicial and administrative decisions that burdens the Canadian legal system.

With promise, however, comes peril.

As Courts and tribunals come to terms with the role that AI will inevitably play in the administration of justice over the next decade, a series of new guidelines and decisions from the Courts has emerged to address the two main problems of using AI in legal adjudication: inaccuracy and bias.

The Near-Ban on AI Adjudication by the Federal Court

As early as July 15, 2020, the Federal Court of Canada foreshadowed, in its Federal Court Strategic Plan 2020-2025, the new role AI would play in litigation.

That being said, even then the Court noted that AI was not “being considered to assist with the adjudication of contested disputes”. Rather, AI was forecast as an “additional tool” in the Court’s arsenal, to be used largely in administrative tasks and possibly in aid of alternative dispute resolution mechanisms such as mediation.

With the emergence of prominent U.S. cases in which lawyers inadvertently made inappropriate use of LLMs, resulting in “hallucinatory” Court submissions citing AI-generated cases that simply did not exist, Canadian Courts began to take notice.

As the U.S. District Court for the Southern District of New York openly acknowledged in its 2023 decision, Mata v. Avianca Inc., the capacity for AI to generate “deepfake” Court decisions has serious implications for the rule of law, including:

  • The waste of time and money associated with investigating and uncovering the AI hallucination;
  • The waste of scarce judicial resources;
  • The potential to deprive claimants of “arguments based on authentic judicial precedents”;
  • The harm to the reputation of the Courts and the judiciary; and
  • The promotion of “cynicism about the legal profession and the American judicial system”.

There is little doubt that the sensationalism surrounding Mata v. Avianca, and decisions like it, reached north of the border. Similar concerns have been expressed in policies devised by the Government of Canada, such as its Guide on the Use of Generative AI.

On December 20, 2023, the Federal Court of Canada issued what is perhaps the most comprehensive policy to date governing the use of AI in litigation: Artificial Intelligence – Interim Principles and Guidelines on the Court’s Use of Artificial Intelligence.

This policy was no doubt animated by the concerns expressed in Mata v. Avianca.  

While the Court recognizes the obvious benefits of using AI in the adjudication process, including “analyzing large amounts of raw data, aiding in legal research, and performing administrative tasks”, it is also well aware of the risks.

The effect is essentially a moratorium on the use of AI by the Court, pending further consultation. The policy provides that:

  • The Court “will not use AI, and more specifically automated decision-making tools, in making its judgments and orders, without first engaging in public consultation”;
  • The Court’s approach to AI will be guided by a number of “principles”, including the Court’s accountability to the public in its “decision-making function”, concerns about AI as a tool that will amplify discrimination and bias, and the need for human oversight in all decisions in which AI plays a role; and
  • Where the Court’s use of AI could have “an impact on the profession or the public”, consultation is required before the Court adopts that use.

Ontario Appellate Courts Place the Onus on the Lawyer for Accuracy

The concern that AI can generate “fake” judicial or tribunal decisions has trickled into Ontario’s Rules of Civil Procedure.

In late 2023, the Rules governing appellate submissions in Ontario Courts were amended to include a small, but significant, change.

Under Rules 61.11 and 61.12, both the appellant’s and respondent’s lawyers must now include a certificate in their written submissions to the appellate Court (known as a factum) confirming that the “person signing the certificate is satisfied as to the authenticity of every [legal] authority” listed in the factum.

For the purposes of this certification, a case published on a government website, by a government printer, on the Canadian Legal Information Institute’s website (CanLII), on the Court’s website, or by a commercial publisher of Court decisions is presumed to be “authentic”, absent “evidence to the contrary”.

In other words, if a decision does not derive from a recognized source of jurisprudential authority, the onus is on the appellate lawyer to verify its authenticity.

Lawyers who rely on LLMs to assist in drafting legal submissions will bear the consequences of any AI hallucinations and of any false representation made to the Court.

While the sanctions for such misconduct remain unclear, and will likely involve a punitive costs award, there is little doubt that the Rules have now set the stage for serious repercussions for a lawyer who falsely certifies the authenticity of the legal authorities relied on in their submissions.

Why Further Tribunal and Court Policies Governing AI Are Needed

Generative AI presents a real opportunity for tribunal and Court decision-makers to reduce the delay and workload currently affecting the administrative State and the judiciary.

At the same time, the inaccuracies and bias inherent in AI adjudication are only beginning to be understood.

Human oversight of the Courts’ and State’s decision-making function remains imperative. 

Without policies governing the limits of AI in the adjudicative process, judges and tribunals face the real risk of employing AI inappropriately or relying on submissions and evidence that may turn out to be fundamentally flawed.

This risk poses serious challenges to the rule of law and the proper administration of justice.

Marco P. Falco is a partner in the Litigation & Dispute Resolution Group at Torkin Manes who has written extensively on the need to regulate AI in the tribunal decision-making process. For more information on how a tribunal can develop appropriate AI policies and procedural rules, you may contact Marco at mfalco@torkin.com.