In today’s world, artificial intelligence, and now generative artificial intelligence (GenAI), is everywhere. Many of us use it to draft emails, plan trips, and choose a restaurant for a date. Generative AI tools like ChatGPT and Claude are so useful that companies should be considering how to implement them into their businesses and use them responsibly.
While these tools are powerful and useful, they are not a substitute for legal advice. Generative AI is not a lawyer. And, most importantly, generative AI is not your lawyer.
A recent federal case offers another example of the law struggling to keep pace with technology, and of why the use of GenAI must be intentional and responsible.
In United States v. Heppner, No. 25 CR. 503 (JSR), 2026 WL 436479 (S.D.N.Y. Feb. 17, 2026), the U.S. District Court for the Southern District of New York held that documents created by the defendant using a public AI platform (in this case, Claude) were not protected by the attorney-client privilege or the work product doctrine.
Prior to his arrest, the defendant used publicly available GenAI tools to research issues relevant to the government’s investigation and created a series of documents based on those prompts. In doing so, he entered into the tools strategic information and facts, including his own understanding of the facts and the law, that he later discussed with his lawyers in privileged conversations. He eventually provided the AI-generated materials to his lawyer as well.
After seizing his devices, the government discovered the AI conversations and documents and sought to use them in the case. The defendant argued that both his inputs and the AI’s outputs were protected by the attorney-client privilege and the work product doctrine. The court disagreed and granted the government access to both the documents and the underlying AI communications (inputs and outputs).
The court’s reasoning was straightforward.
- Communicating with an AI tool is not communicating with a lawyer.
- Information shared with public third-party platforms is generally not confidential.
The court specifically examined the user agreements of public AI platforms, which permit the collection and potential disclosure of user inputs and outputs to third parties, including regulators. Because the defendant voluntarily shared the information with a third party under those terms, the court concluded that he had no reasonable expectation of confidentiality.
Notably, the defendant used publicly available AI tools rather than closed or enterprise AI platforms. Enterprise platforms allow organizations to retain control of their data through underlying terms with providers, who are contractually prohibited from accessing, sharing, or disclosing user inputs and outputs. Had the defendant used such an enterprise tool, the court’s analysis of his expectation of confidentiality might well have reached a different conclusion.
The court also rejected the defendant’s claim, raised in an effort to invoke privilege protection, that he was seeking “legal advice” from the tool. Because the AI tool itself included a disclaimer (e.g., “I am not a lawyer…”), the court found that the defendant could not reasonably have expected “legal advice” from it.
Importantly, the court noted that the defendant’s subsequent provision of the materials to his attorney did not “settle” the issue: privilege does not attach retroactively. And to the extent that otherwise privileged information was included in his AI prompts, the court found that the privilege was waived when he disclosed that information to the AI platform, just as if he had shared it with any other third party.
By contrast, one recent decision found that documents created by a party using publicly available AI in anticipation of litigation may be subject to work product protection, noting in part that AI platforms are “tools,” not “people.” See Warner v. Gilbarco, Inc., No. 2:24-cv-12333, 2026 WL 373043, at *5 (E.D. Mich. Feb. 10, 2026). Although the facts of the two cases differ, it remains unclear how courts will treat GenAI for privilege purposes: as software used by the client, or as a third party.
So what do these seemingly contradictory opinions mean? At a minimum, don’t assume that documents created using publicly available AI tools are protected, even if they were created with litigation in mind. Enterprise tools with appropriate security features are more likely to keep work product protected than publicly available tools. If in doubt, consult a human attorney.
What does this mean for your business and your use of generative AI? Here are some practical takeaways.
- Adopt internal guidelines. Implement clear policies governing employee use of GenAI tools, especially when dealing with legally privileged or other sensitive information, including sensitive information from or about customers. Policies should address both publicly available and enterprise tools and be clear about who can use GenAI and for what purposes. While it may be tempting to use GenAI for any and every task, establish clear use cases and guardrails to ensure GenAI is used in a meaningful, responsible, and transparent manner. Pending further development of the law, organizations must rely on their own internal governance frameworks to protect their confidential, trade secret, intellectual property, and privileged information. See, for example, the recent FBT Gibbons article on the use of GenAI in the employment law context.
- Treat publicly available AI tools like any external communication. Existing policies and procedures regarding confidential, sensitive, trade secret, and privileged information should also apply to the use of publicly available AI tools. If the information shouldn’t be shared outside your organization, don’t enter it into publicly available AI tools.
- Do not expect GenAI to provide privileged legal advice. Even on a private enterprise AI platform, the AI is not a lawyer, so seeking legal advice from an AI does not trigger the attorney-client privilege. This is true regardless of how secure the platform is or what protections are in place. If you use GenAI to prepare documents for litigation, those materials may be subject to work product protection, but work product protection is narrower than the attorney-client privilege and varies by jurisdiction. A lawyer can advise you on whether and how to use GenAI securely, or handle AI-assisted tasks within a protected channel.
As generative AI continues to evolve, so too will the legal landscape surrounding privilege and confidentiality. Taking proactive steps now will help protect your business and preserve your legal options in the future.
At FBT Gibbons, we are focused on adopting and leveraging generative AI to help clients realize efficiencies while freeing up lawyers to focus on more complex and strategic legal issues. We also help clients evaluate the right tools to achieve their goals and create approaches that support innovation and responsible use of AI designed to maintain privilege and confidentiality while building and maintaining trust with clients.
