A new class action lawsuit filed in San Diego highlights significant risks for companies deploying AI tools that listen to, record, and summarize customer and patient conversations. On November 26, Sharp Healthcare was hit with a wide-ranging privacy lawsuit for allegedly using an AI-powered “ambient clinical documentation” tool to secretly record conversations between doctors and patients without proper consent. And while healthcare is the industry targeted in this particular lawsuit, consumer-facing companies that use AI voice tools, quality assurance recordings, or conversation analysis engines should take note. This insight explains what happened, why plaintiffs see these lawsuits as high-value opportunities, and six steps companies can take now, before their AI tools become the subject of tomorrow's class action headlines.
Lawsuit highlights key risks facing AI recording tools
According to the complaint, Sharp deployed an AI vendor in April 2025 to automatically record clinical encounters on clinicians' devices and generate draft notes for electronic medical records. The lawsuit alleges:
- Sharp failed to obtain the consent of all parties before recording confidential conversations between doctors and patients, as required by California's strict wiretapping law, the California Invasion of Privacy Act (CIPA). Plaintiffs argue that ambient AI documentation constitutes electronic eavesdropping even if the vendor is not “listening” with human senses; merely capturing and transmitting audio outside the organization (even to transcribe it), they argue, is enough to trigger liability.
- Medical information (symptoms, diagnoses, medications, treatment plans, and personally identifying information) was sent to the vendor's cloud system, where vendor personnel had access to the data, allegedly violating California's Confidentiality of Medical Information Act (CMIA).
- Patients' charts falsely documented that they had been advised of the AI recording and consented to it, even when no such advice or consent occurred. The plaintiffs accuse Sharp of failing to use encounter-specific verbal consent, pre-visit notice, on-screen or audible indications that recording was in progress, or written authorization.
- Sharp told patients that the vendor stores audio for approximately 30 days and that recordings could not be removed immediately upon request.
The complaint seeks statutory penalties, punitive damages, injunctive relief, and complete correction of allegedly inaccurate medical records for a class that could exceed 100,000 patients.
Why this case matters beyond medicine
There are several reasons why companies in all industries should pay attention to this lawsuit.
AI recording tools create potential for CIPA exposure
As FP's Digital Wiretapping Litigation Map tracks, CIPA is one of the most plaintiff-friendly wiretapping laws in the country. Plaintiffs can seek statutory damages of $5,000 per violation, per call, and per recording. That math is why plaintiffs' firms continue to bring lawsuits against retailers, banks, service brands, and service providers that use call center recordings, chatbot summaries, or “voice intelligence” platforms.
AI vendors publicly tout their customers
As more AI vendors publicly announce partnerships with major clients (“more than 1,000 providers use our ambient AI tools”), plaintiffs' firms see a ready-made roadmap: the existence of a public customer list can serve as a pre-constructed class definition.
These theories apply across industries
The claims raised in this lawsuit, including wiretapping, improper disclosure to third-party AI vendors, false or misleading consent statements, retention failures, and lack of opt-outs, are exactly the theories now emerging in other industries and contexts, including:
- Retail customer service call recordings
- Customer intake calls of any kind
- Financial services call analytics
- Hospitality and travel chat/voice systems
- Companies experimenting with AI note-taking tools
6 practical steps companies can take now
Here are six practical steps you can consider taking today to minimize your chances of being dragged into a class action lawsuit tomorrow.
1. Audit technologies that capture or transmit voice and text during customer interactions
The most common areas we see today include AI note-taking tools, Whisper or other API-based transcription tools, “agent assist” or “quality assurance” analytics, and virtual agents that record voice or text input. Map where your audio goes, who receives it, and how long each vendor stores it.
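For teams that want to operationalize this audit, a simple structured inventory works well. The sketch below is a minimal, hypothetical example; the tool names, vendors, and retention figures are illustrative assumptions, not facts from the lawsuit:

```python
# Minimal sketch of a machine-readable inventory of voice/text capture tools.
# Every tool, vendor, and retention figure here is a hypothetical placeholder.
from dataclasses import dataclass

@dataclass
class CaptureTool:
    name: str                   # what the tool is
    vendor: str                 # who receives the audio or text
    data_types: list[str]       # what is captured ("audio", "transcript", ...)
    destination: str            # where it goes ("vendor cloud", "on-prem", ...)
    retention_days: int         # how long the vendor stores it
    deletable_on_request: bool  # can it be deleted promptly on request?

inventory = [
    CaptureTool("Ambient note-taker", "ExampleVendor A", ["audio", "transcript"],
                "vendor cloud", retention_days=30, deletable_on_request=False),
    CaptureTool("QA call analytics", "ExampleVendor B", ["audio"],
                "vendor cloud", retention_days=90, deletable_on_request=True),
]

# Flag tools that send data off-site without prompt deletion controls --
# the combination most likely to draw CIPA/CMIA-style scrutiny.
for tool in inventory:
    if tool.destination != "on-prem" and not tool.deletable_on_request:
        print(f"REVIEW: {tool.name} sends {tool.data_types} to {tool.vendor}; "
              f"{tool.retention_days}-day retention, no on-demand deletion")
```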
2. Implement clear consent protocols
Businesses should consider the following (a brief sketch of consent-gated recording follows the list):
- Pre-interaction notice (on websites, intake forms, appointment reminders, and IVR prompts)
- Real-time consent at the beginning of the encounter
- Visible or audible indicators while recording is active
- Separate written authorization when California health or financial information is involved
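To make the “real-time consent” element concrete, one approach is to gate recording on an encounter-specific consent record rather than a global setting. A minimal sketch, using entirely hypothetical names and data structures:

```python
# Minimal sketch: recording starts only after encounter-specific consent exists,
# and an indicator is surfaced while recording. All names are hypothetical.
from datetime import datetime, timezone

class ConsentRequired(Exception):
    pass

def start_recording(encounter_id: str, consents: dict[str, dict]) -> dict:
    """Refuse to record unless this specific encounter has a valid consent record."""
    consent = consents.get(encounter_id)
    if consent is None or not consent.get("all_parties_consented"):
        raise ConsentRequired(f"No valid consent on file for encounter {encounter_id}")
    print(f"RECORDING ACTIVE for encounter {encounter_id}")  # visible/audible indicator
    return {
        "encounter_id": encounter_id,
        "started_at": datetime.now(timezone.utc).isoformat(),
        "consent_ref": consent["consent_id"],  # tie the recording to its consent
    }

consents = {"enc-001": {"consent_id": "c-123", "all_parties_consented": True}}
session = start_recording("enc-001", consents)   # proceeds
# start_recording("enc-002", consents)           # would raise ConsentRequired
```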
3. Rewrite your vendor contract now
Check to see if your contract with your AI transcription or analytics vendor includes:
- Customer-controlled data retention and deletion
- No secondary use of the data (training, QA, model development) without explicit consent
- Immutable logging of access and deletion events
- Certificate-of-destruction requirements
- A prohibition on vendor personnel accessing identifiable records unless specifically authorized
4. Prevent vendors from publicizing your relationship on their own
Don't agree to let a vendor name you as a customer or publish a case-study press release about your use of its AI tools without first consulting your AI legal counsel about the implications. Companies commonly loop in their marketing and PR teams when such opportunities arise, but far less commonly consult the legal department until a lawsuit arrives. In other cases, a vendor relations manager may approve a vendor's request to list the company as a customer on its website, or provide a testimonial, without obtaining sign-off from the legal department. Make sure this doesn't happen in your organization.
5. Disable the default “Agree” autofill
If your AI system inserts boilerplate text like “customer consented,” turn that feature off. Aim instead for manual verification, audit trails, and separation of consent-capture fields from documentation fields, as sketched below.
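As one illustration of what “manual verification plus an audit trail” might look like, here is a minimal sketch; the field names, storage, and identifiers are assumptions for illustration only:

```python
# Minimal sketch: consent is never defaulted, must be set by an identified human
# action, and lives in its own audited record rather than pasted into the note.
import uuid
from datetime import datetime, timezone

audit_log: list[dict] = []

def record_consent(encounter_id: str, recorded_by: str, consented: bool) -> dict:
    """Create an explicit consent record; there is no 'assumed yes' path."""
    entry = {
        "consent_id": str(uuid.uuid4()),
        "encounter_id": encounter_id,
        "consented": consented,       # must be supplied explicitly by a person
        "recorded_by": recorded_by,   # who captured the consent
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    audit_log.append(entry)           # append-only trail for later review
    return entry

# The documentation record carries only a reference to the consent record --
# the AI tool cannot auto-insert "patient consented" boilerplate into the chart.
consent = record_consent("enc-001", recorded_by="dr.smith", consented=True)
note = {"encounter_id": "enc-001", "consent_ref": consent["consent_id"], "body": "..."}
```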
6. Build fast and verifiable deletion workflows
Courts are increasingly considering on-demand deletion to be part of basic privacy hygiene, particularly in California. For this reason, businesses must be able to immediately stop processing, submit a verified deletion request to the vendor, and provide written confirmation of the deletion to the customer.
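A deletion workflow along these lines might look like the following sketch; the vendor client and its methods are hypothetical placeholders, not a real vendor API:

```python
# Minimal sketch of a verifiable deletion workflow: halt processing, request
# deletion from the vendor, and retain written proof. The vendor_client object
# and its methods are hypothetical, standing in for whatever interface or
# contractual process your vendor actually provides.
from datetime import datetime, timezone

def handle_deletion_request(subject_id: str, vendor_client) -> dict:
    # 1. Immediately stop any further processing of this person's recordings.
    vendor_client.suspend_processing(subject_id)

    # 2. Submit a verified deletion request covering the audio and everything
    #    derived from it (transcripts, summaries, QA samples).
    receipt = vendor_client.request_deletion(
        subject_id, scope=["audio", "transcripts", "derived_data"]
    )

    # 3. Retain written confirmation to give the requester -- and to show a
    #    court or regulator later, if needed.
    return {
        "subject_id": subject_id,
        "vendor_receipt": receipt,
        "confirmed_at": datetime.now(timezone.utc).isoformat(),
    }
```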
