Client Alert 

March 9, 2026


Court Rules that AI-Generated Materials Are Not Privileged

In a question of first impression nationwide, the U.S. District Court for the Southern District of New York recently held that a client’s communications with a publicly available generative AI platform were not protected by the attorney-client privilege or the work product doctrine.

 

In the case, the defendant, after learning that he was the target of a government investigation, used an AI chatbot to draft documents analyzing his legal exposure and preparing a potential defense strategy, without any guidance from his counsel to do so. The documents were seized by the government in connection with the defendant's arrest, and his counsel asserted that they were protected from disclosure under the attorney-client privilege and the work product doctrine.

 

The court declined to extend these protections to the AI-generated documents and communications based on three core findings:


  1. The communications did not fall within the attorney-client privilege because they were not communications between a client and his attorney;
  2. There was no reasonable expectation of confidentiality in the communications, as the platform’s terms of service made clear that user inputs and outputs could be used for model training and could be disclosed to third parties; and
  3. Materials prepared with AI on the client’s own initiative, rather than at counsel’s direction, could not be shielded as work product.

 

Although the matter was a criminal case, the court's reasoning has broad implications for employers. Managers, HR professionals, or in-house personnel may be tempted to use generative AI tools to analyze discrimination or harassment complaints, summarize legal risks in employment disputes or requests for accommodation, or prepare notes and strategy documents before contacting counsel. Under the court's ruling, such AI interactions may not be privileged and could be discoverable in litigation.

 

Based on this decision, employers should consider the following measures:


  • Limit the use of public AI tools when analyzing legal or HR issues;
  • Train HR and management on the privilege risks related to the use of AI;
  • Consult counsel early when workplace disputes arise and follow counsel's guidance closely; and
  • Evaluate secure AI platforms with stronger confidentiality protections for internal use.

 *           *           *


If you have questions or would like additional information, please contact our Labor & Employment attorneys or the primary EGS attorney with whom you work.


This memorandum is published solely for the informational interest of friends and clients of Ellenoff Grossman & Schole LLP and should in no way be relied upon or construed as legal advice.