Businesses and organizations deploying artificial intelligence may be creating new risks in the process, according to a report released Thursday by Lockton Re and cyber insurer Armilla AI.
Biases in model training can be replicated in results, and errors in AI models can lead to data corruption, the report says. AI systems can also produce “hallucinations,” or false information. There are also potential regulatory and intellectual property issues, according to Armilla and Lockton Re, the reinsurance business of brokerage firm Lockton.
Such risks can cascade across multiple insurance areas, including cyber, personal injury, directors and officers, and employment practices liability.
“Systems risk” is also a potential issue when widely deployed AI models or systems cause disruption across multiple organizations or geographies.
The emerging and evolving nature of the technology and its potential threats makes analysis even more difficult. “The risks and long-term effects associated with AI are still being studied, but there is limited information available at this time,” Lockton Re and Armilla said in their report.
Some new products that proactively cover exposures such as model errors are beginning to close potential gaps in coverage, the report said.
