ALIA reminds Alberta lawyers to exercise caution and diligence when using generative artificial intelligence (“Generative AI”) tools such as ChatGPT or other large language models in their practices. In the recent decision of Reddy v Saroya, 2025 ABCA 322 (“Reddy”), the appellant filed a factum that was found to contain references to fabricated case authorities. The appellant’s counsel advised the Alberta Court of Appeal (“ABCA”) that he had retained a third party to draft the factum and had not verified the cited cases prior to submission. Ultimately, the appellant was permitted to file an amended factum, with potential cost consequences to follow.
In Reddy, the ABCA reiterated the professional responsibilities of lawyers opting to utilize Generative AI tools in their practices. The ABCA stressed that lawyers are required to verify and cross-reference submissions generated by Generative AI, and that if a “lawyer engages another individual to write and prepare material to be filed with the court, the lawyer whose name appears on the filed document bears ultimate responsibility for the material’s form and contents.”
The ABCA cited three key resources guiding lawyers’ responsibilities and obligations in the use of Generative AI in their practices.
First, the ABCA noted that Rule 3.1-2 of the Law Society of Alberta’s (the “Law Society”) Code of Conduct requires lawyers to perform all legal services to the standard of a competent lawyer, and that the relevant commentary explains that lawyers should “develop an understanding of, and ability to use, technology relevant to the nature and area of a lawyer’s practice and responsibilities.”
Second, the ABCA cited the Law Society’s “Generative AI Playbook” as a resource which should be the “starting point for Alberta lawyers seeking to harness the benefits of disruptive technologies like [Generative AI] while safeguarding their clients’ interests and maintaining their professional competence.”
Finally, the ABCA cited the Alberta Courts’ Notice to the Public and Legal Profession dated October 6, 2023, titled “Ensuring the Integrity of Court Submissions When Using Large Language Models” (the “Notice”), as guidance to be followed to reinforce the integrity and credibility of legal proceedings. The Notice urges those using Generative AI to 1) exercise caution when referencing legal authorities or analysis derived from Generative AI, 2) rely exclusively on authoritative sources, and 3) ensure there is always a “human in the loop” to verify any AI-generated submissions with “meaningful human control.”
For more information about practice directions issued by Canadian courts, see the May 2024 ALIAdvisory: Practice directions push for transparency when Artificial Intelligence is used in legal matters as ChatGPT inaccuracies make headlines. Note that several Canadian courts have issued further practice directions since that ALIAdvisory (e.g. in Nova Scotia and Quebec).
In addition to potentially significant consequences for clients, the ABCA warned that lawyers who do not adhere to the Notice may also face consequences, which will be within the discretion of the panel or individual judge. The ABCA stressed that counsel (and self-represented litigants) “should not expect leniency where they have failed to adhere to clear and unambiguous requirements.” Courts may consider remedies including striking submissions or imposing costs awards (including against counsel personally). Courts may even determine that a penalty should be imposed, that contempt proceedings should be initiated, or that a referral to the Law Society is warranted. In Reddy, the ABCA invited further submissions from the parties (including the appellant’s counsel) on whether the panel should direct that the appellant’s lead counsel pay a costs award and, if so, in what amount.
ALIA adds to this warning that the Alberta Lawyers’ Professional Liability and Misappropriation Indemnity Group Policy does not cover penalties or costs awarded personally against a Subscriber as a result of the Subscriber’s conduct in litigation.
Reddy is an important reminder that lawyers who use Generative AI in their practices must understand the potential benefits and risks, exercise caution, and conduct meaningful cross-referencing in every instance to ensure that citations and content hold up to scrutiny.