The legal gap in AI is not just about compliance issues, but also about business risks.

AI For Business


A new report from Zendesk outlines the growing issues for businesses deploying AI tools. Many are not prepared to manage risk. AI Trust Report 2025 finds that while AI moves to customer service and support, only 23% of 23% feel ready to manage it.

AI Governance Legal

The report highlights concerns ranging from data privacy to model bias. However, the core issue is trust. If the customer doesn't understand or feel comfortable using AI, they are less likely to be involved. And if companies don't have frameworks in place, they are exposed to legal, reputational, and operational fallout.

Compliance has not been caught up

One of the biggest concerns of legal teams is the fragmented nature of AI regulations. EU AI laws are centrally set in the world, but many countries and US states have developed their own frameworks. This means that businesses need to adhere to multiple, sometimes conflicting sets of rules.

According to the report, only 20% of companies have a mature governance strategy for generating AI. This means that most companies are often rushing to build processes for consent, data handling, model monitoring, and explanationability after the tool is already in use.

For CISOS and CLOS, this later involvement can be a problem. Legal reviews can be too late to shape the design of a system or vendor choice, increasing the likelihood of regulatory failure.

Zendesk Chief Legal Officer Shana Simmons told HELT Security: “Our AI governance is built around core principles that apply to legal jurisdictions, such as design, transparency and explanability, customer control, and more, and more risk.”

AI introduces new types of risk

Researchers outline some of the inherent AI threats that legal teams and CISOs must understand. These include:

  • Jailbreak, users try to get AI tools or say or do something they shouldn't do
  • Fast injections where attackers manipulate AI behavior through input
  • Hallucinations in which AI produces false or manufactured information
  • Data leakage where sensitive information ends in AI output

These risks go beyond the typical IT threat. For example, if the AI model gives customers the wrong answers or leaks personal information, your business can face both legal claims and reputational harm. And if you can't explain or audit the actions of that AI, it's much more difficult to defend those decisions.

Customers are hoping to oversight

Customers are paying attention. Zendesk cites research that shows that customers want to feel “respected, protected and understood” when interacting with AI. This means that companies must go beyond simple disclaimers or checkboxes.

Customers expect to know when AI is involved, how it works, and what controls there is over the data. If these expectations are not met, businesses could see terminations, customer complaints, or even class actions, especially in regulatory industries such as healthcare and finance.

For legal teams, they raise new questions about product design, vendor contracts, and internal accountability. If AI doesn't work, who owns the risk? What if the agent relies on flawed AI recommendations? Those are business questions that Close and CISOS need to answer together.

What legal instructors can do now

Companies that treat AI governance as an afterthought put themselves at risk. For legal teams, the response should be proactive, not reactive. That means working closely with cisos.

  • Current AI deployment audits for gaps in transparency, fairness, or consent
  • Build a flexible compliance framework that can be adapted to as laws evolve
  • Make sure your vendor is contractually bound by governance standards
  • Participate early in your AI product plan as well as final reviews

Most importantly, it means helping businesses set up guardrails. If a customer sues an AI decision, the company should be able to show how the decision was made, who reviewed it, and what safeguards are in place.



Source link

Leave a Reply

Your email address will not be published. Required fields are marked *