AI hallucinations are "symptom, not disease" in Canada's courts
Canada's court system has seen multiple cases involving both hallucinated citations and factual errors produced by artificial intelligence tools.
Cristiano Therrien, a 25-year industry veteran who teaches tech and cyber law at the Université de Montréal and trains law firms on in-firm AI standards, told TechDay that as Canadian legal precedent stands, the onus is on the deployer, the human, to keep things factual.
"Hallucinations are the symptom, not the disease," said Therrien. Adding that the increased use of AI in court cases is setting a precedent within the common-law judicial system.
This decade's wave of generative AI has produced multiple defining, precedent-setting cases, but what Therrien calls the template case is the British Columbia small-claims matter Moffatt v. Air Canada.
In 2024, the British Columbia Civil Resolution Tribunal found that Air Canada negligently misrepresented its bereavement‑fare policy through an AI chatbot on its website and was liable for the resulting damages. The claimant, Jake Moffatt, relied on the chatbot's inaccurate information when booking a flight to attend his grandmother's funeral. Moffatt ended up paying a higher fare than he would have if the bereavement‑fare rules had been correctly explained.
The Tribunal held that Air Canada owed a duty of care to chatbot users, found the advice negligently misleading, and awarded Moffatt about $650 in damages, plus modest interest and tribunal-fee recovery.
"Air Canada was few dollars in damages, but it was enormous in doctrine," Therrien said. "It was the first time that a Canadian tribunal refused to let a company hide behind its own chatbot. That's a precedent that we decided for 20 years ... you can see here that Canadian courts defending the integrity of the legal system itself against the AI-generated noise."
While AI use in law remains largely unregulated as of April 2026, a section of Quebec's civil code that establishes liability for the autonomous acts of things is proving to provide a legal basis in the age of generative AI.
Article 1465 of the Civil Code of Québec dictates that the custodian of a thing is liable for damage caused by its autonomous act, unless they prove they committed no fault. It creates a presumption of fault for damage arising from items such as water heaters, but has been cited by legal experts as a possible basis for litigating AI-related cases.
While the European Union's AI Act is a top-down model, Therrien says Canada's lack of official governance is shaping the landscape one case at a time. A "network model" of AI risk, by contrast, frames threats as interconnected, cascading failures across data, infrastructure, and deployment, rather than the isolated incidents assumed by top-down models.
"Law societies are volunteer driven, and there is no central enforcement agency coordinating all the moving pieces here. So a model that isn't recognised doesn't get funded, and a model that isn't funded doesn't protect anyone.," said Therrien. "Canada shouldn't copy the European Union. It won't wait for the courts either. It will probably keep doing this network governance, but it needs the political courage to name it and give budget to enforce it."
Purpose-built legal AI platforms, such as the US-founded, lawyer-built Harvey and Canada's Spellbook, offer services for both transactional lawyers and litigators, ranging from litigation briefs to antitrust workflows. Last month, Harvey was valued at USD $11 billion.
General-purpose large language models are also used by legal professionals to aid casework.
In recent news, Wall Street law firm titan Sullivan & Cromwell filed a note on April 18 apologising to a federal judge for ChatGPT hallucinations in a bankruptcy case.
"The firm's policies on the use of AI were not followed in connection with the preparation of the Motion. In addition, the Firm has general policies and training requirements for the proper review of legal citations. Regrettably, this review process did not identify the inaccurate citations generated by AI, nor did it identify other errors that appear to have resulted in whole or in part from manual error," wrote the firm's co-head of Global Finance & Restructuring, Andrew G. Dietderich.
Over three dozen corrections were outlined in Dietderich's note.
Canada is not without similar issues. In the B.C. Supreme Court, Zhang v. Chen produced two decisions, in 2023 and 2024 respectively: the underlying high-value family separation case and a later, widely cited costs decision concerning two supposedly on-point precedents fabricated by ChatGPT, which the applicant Chen's lawyer then cited in a notice of application.
Chen had brought the application seeking permission to take his children to China; when it was dismissed, the mother, Zhang, was treated as the substantially successful party for costs purposes.
The case became a comparative reference for later cases, including last year's Ontario Superior Court case Ko v. Li, in which Ko's counsel had to show cause why she should not be held in contempt of court.
"My actions fell far short of the ethical standards expected of an officer of the court and have caused me immense personal remorse. I fully acknowledge that misleading the Court, even unintentionally at first and then deliberately to mitigate my shame, erodes the integrity of the judicial process," said Li's counsel, Jisuh Lee, in a letter to the court dated September 30, 2025.
Therrien added that there is no law prohibiting legal professionals from using AI tools in their work, but the onus is on the user to provide reasoning and a defence for anything submitted to the court.
"What started as a bad citation escalated them to be a criminal content referral. Not because the lawyer used AI, but because the lawyer lied about using it. So that's a real lesson here. The courts are not punishing AI, they're punishing lack of candour and professionalism."