According to the Financial Times, AIG, WR Berkley and Great American have each asked regulators to approve new exclusions that would let them reject claims arising from the use or integration of AI systems, whether chatbots, agents or models embedded deep within a workflow.
Big insurers are scrambling to cut off their exposure to AI failures as a spate of costly public mistakes pushes concerns about systemwide losses to the top of their risk models.
Companies around the world are rushing to deploy AI tools, and the missteps are already costly. Google is facing a $110 million defamation lawsuit after its AI Overviews feature falsely claimed that a solar company had been sued by a state attorney general.
Air Canada was ordered to honor discounts its customer-service chatbot invented out of thin air. British engineering firm Arup lost £20m after staff were scammed by a digitally cloned executive on a video call. Incidents like these make it difficult for insurers to draw a clear line around liability.
Mosaic Insurance, which sells professional coverage for AI-enhanced software, said the output of large language models remains too unpredictable for traditional underwriting, calling the models a "black box." It still refuses to take on LLM-driven risks, including systems like ChatGPT.
Some of the proposed exclusions are far-reaching. One of WR Berkley's versions would block claims tied to the "actual or suspected use" of AI, even where the technology was only a small part of the product. AIG told regulators it does not intend to invoke its exclusion immediately, but wants one in place as the frequency and severity of claims continue to rise.
The concern is not just a large loss at a single company. The nightmare scenario is systemic: one upstream model or vendor misfires, and suddenly 1,000 insureds are affected at once.
Kevin Kalinich, head of cyber at Aon, said the market could absorb a loss of $400 million or $500 million caused by a single company's AI agent. What it cannot absorb is a wave of correlated failures surging through the system.
Some carriers are attempting partial fixes, with regulatory approval. QBE has introduced an endorsement covering fines under the EU AI Act, capped at 2.5% of the policy limit.
Chubb agreed to cover certain AI incidents while excluding those that could cause widespread, simultaneous harm.
Brokers say that while these endorsements appear to offer more protection, they are narrower in scope and should be read carefully.
As carriers and regulators redraw boundaries, companies may find that the risks of AI adoption weigh much more heavily on their balance sheets than expected.
