Who will cover AI business failures? Some insurance companies are cautiously stepping up their response

As more companies rely on artificial intelligence “agents” to independently drive revenue, some insurers are stepping in to cover their mistakes, while others are taking a cautious stance.

“The aim of using advanced AI is to effectively replace human assistance and oversight in decision-making,” said Phil Dawson, head of AI policy and partnerships at specialist insurer Armilla.

The “agentic AI” trend is in full swing, with bots autonomously handling computer tasks and companies trimming their human workforces as a result.

“This really challenges some of the basic logic of existing insurance coverage,” Dawson says.

While companies in the AI race are working to perfect the technology, errors remain a real possibility, including “hallucinations,” in which fabricated output is presented with confidence.

In a research paper published late last year by brokerage firm Willis Towers Watson, analyst Sonal Madhok and law professor Anat Lior wrote that AI-related liability risks are today mostly accounted for only implicitly in insurance contracts, a situation known as “silent coverage.”

They argue that this situation resembles the liability questions raised in the early days of cyber risk.

“We can hope that in the near future, policies will explicitly address AI and bring an end to the era of silent coverage,” Lior and Madhok said.

Insurers are already moving away from a “wait-and-see approach” when it comes to AI incidents, said Jonathan Mitchell, head of financial sector practice at brokerage firm Founder Shield.

Mitchell said some standard policies now include “absolute AI exclusion” clauses that explicitly deny coverage for AI-related incidents.

Dawson gave the example of a commercial real estate company that tried to deploy AI agents in the role of full-time employees and ultimately had to turn to specialist policies.

“AI malfunction” protection

Founder Shield specifically incorporates the “AI malfunction and hallucination” scenario into its professional services policy, which covers losses caused to clients by the technology.

The scope of such policies can, for a price, extend beyond computer networks to cover real-world damage, such as an AI agent accidentally ordering too much inventory for a company.

Before initiating coverage, Armilla tests AI models for vulnerabilities and assesses whether a client’s risk management framework complies with international standards.

However, like other insurance companies, Armilla may refuse to underwrite certain risks.

For example, anything related to medical diagnostics or mental health-focused applications will not be covered.

Munich Re, a leading global insurance and reinsurance company, provides insurance to companies that design AI models and those that use the technology.

“This risk of model errors or hallucinations cannot be completely avoided by any technical means,” said Michael von Gablenz, head of AI insurance at Munich Re.

“At the end of the day, AI systems are statistical models, and all statistical models include uncertainty,” he said.

Still, AI risk presents a huge opportunity for insurers, with von Gablenz estimating that the market could eventually surpass cybersecurity insurance in size.

The Deloitte Center for Financial Services predicts that the global AI premium market could grow to up to $4.8 billion by 2032.

(This story has not been edited by NDTV staff and is auto-generated from a syndicated feed.)
