As artificial intelligence (AI) becomes increasingly sophisticated, it raises important questions for civil litigation, particularly regarding the reliability of AI-generated evidence. To explore these issues, on Friday, April 12th, the Civil Justice Research Initiative hosted “AI and Evidence in Civil Litigation: An Introduction and Discussion of the Issues” with Berkeley’s Center for Law and Technology. The symposium featured four panels of scholars, practitioners, AI experts, and judges. In addition to providing an introduction to how AI is already affecting civil practice, the program examined the potential implications for the admissibility and presentation of evidence, as well as how AI is likely to shape civil litigation in the future. The program was made possible by a generous gift from the AAJ’s Robert L. Habush Endowment.
In the first panel, Karen Silverman, a retired Latham & Watkins LLP attorney and the founder and CEO of The Cantellus Group, addressed the “two very different beasts” that are AI and the legal system. According to Silverman, the interaction between the two has at times led to an over-reliance on technology. She pointed specifically to hearings in which made-up case law was submitted, incidents that have heightened skepticism toward AI in litigation and underscored the role legal professionals must play in human oversight and intervention. Other panelists provided an overview of the challenges AI poses for the reliability of evidence and of attempts to legislate the use of AI-generated evidence in cases.
The second panel focused more specifically on the implications for presenting evidence in civil litigation. Michele E. Gilman from the University of Baltimore School of Law and Ngozi Okidegbe from Boston University outlined several key problems with AI in law, including a lack of transparency for clients; problematic automated systems being discovered too late in court; the difficulties plaintiffs’ counsel face in discovering and identifying the relevant algorithms; and broader concerns about algorithmic competency.
Gilman and Okidegbe also emphasized the potentially discriminatory impacts of AI. Gilman described how AI-generated tenant screening reports may result in long-lasting data profiles that disproportionately impact marginalized communities in housing disputes. As a solution, Gilman called for a more interdisciplinary approach and greater engagement with impacted communities to better understand the broader implications of AI for civil justice.
Andrew Selbst from UCLA School of Law addressed the challenges courts face in deconstructing biases that are built into particular AI systems in ways that are not always readily apparent. Selbst then explained how some cases — such as products liability, copyright retransmission, and copyright and software cases — already interrogate technological design. He noted that the same type of interrogation should be conducted for cases involving AI.
For a comparative perspective, Sabine Gless from the University of Basel in Switzerland discussed the AI Act governing data quality in European courts. While this regulation aims to create “trustworthy” AI, Gless noted that the current EU “evidentiary toolkit” is inadequate for ensuring that the highest quality of data is admitted into evidence. Gless also stated that both European and United States courts face the same concern over AI becoming a type of “witness” without proper reliability testing.
The third panel homed in on the implications of AI for admissibility assessments and trials. Bryant Walker Smith from the University of South Carolina School of Law addressed people’s inclination to trust data produced by AI. This is problematic, Smith said, because “data is not conclusive. It actually can raise as many questions as it can answer.”
Deborah Nelson of Nelson Boyd Attorneys and Boyd Trial Consulting raised a different concern: the financial consequences, or even wrongful convictions, that could result from more “aggressive” approaches in which multiple opinions influence the admissibility of AI evidence in litigation.
Other panelists — such as Rebecca Wexler of Berkeley Law and Lindsay Freeman of UC Berkeley’s Human Rights Center — discussed the potential implications for the rules of evidence and best practices for using AI to gather and present evidence.
In the fourth and final panel, speakers took up the question of what the future of AI looks like in civil litigation. Magistrate Judge Peter H. Kang stressed that the bar will need more technological education going forward to understand “what’s really going on under the hood for each of the tools.” He also emphasized that many AI-related issues will likely arise in pre-trial litigation, making discovery the area where he expects to see the most discussion of AI usage.
Professor Andrea Roth from Berkeley Law took a slightly different approach to the question, stating that how the rules of evidence function will depend largely on how juries respond to AI in litigation. She drew parallels between AI and the introduction of photographs into the courtroom, a once-controversial change that led to a new category of demonstrative evidence. She also agreed with Brandie Nonnecke of the CITRIS Policy Lab that these questions of AI and evidence are not entirely unprecedented. According to Roth, what is new is how AI has increasingly exposed gaps between evidence and law that have been there all along for all kinds of litigation proof.
Panelists were later asked about risk thresholds for AI in litigation. Nonnecke expressed concern about the lack of federal guidance in this arena, but noted that California has legislation in the works. “We actually have a bill that would require lawyers to keep a record of their use of generative AI tools for seven years,” Nonnecke stated, referring to Assembly Bill 2811. Lucilla Sioli, the Director for AI and Digital Industry at the European Commission, also offered a comparative perspective on the risks associated with AI.
Ultimately, the panelists voiced concern about the human tendency to trust AI tools, especially in civil litigation. “Even if it’s certified, it will still make false positives and false negatives,” Nonnecke stated. The symposium thus concluded on a cautionary note as evidentiary AI becomes more prevalent and creates unknowns for both the human parties involved and civil litigation at large.