Understanding the emerging risks of “silent AI”
As policyholders integrate artificial intelligence (AI) into their operations, insurers face a growing challenge in managing new categories of risk. These risks could trigger claims under traditional insurance contracts that were not specifically designed to cover them, such as professional indemnity, business interruption, and directors’ and officers’ (D&O) liability insurance.
Over the past decade or so, we have seen insurers grapple with “silent cyber” – the inadvertent coverage of cyber incidents under general policies not designed for them. We have helped insurers review their policies for silent cyber exposure and draft specific cyber wordings. We may now be seeing the emergence of “silent AI”, where insurers inadvertently cover AI risks, including the financial, operational, regulatory and reputational risks arising from the deployment and use of AI.
Key legal and regulatory risk factors
It is important to analyze AI-related risks, and the losses and damages associated with them, to determine whether they are covered or excluded. To do this, we need to consider how the AI is used, including the level of human supervision over the output it generates. For example, a product or service may be AI-assisted, with the result that the policyholder is liable as the provider of that product or service rather than the AI developer. This allocation of liability is important when evaluating coverage under professional indemnity or product liability policies.
Data privacy considerations
Significant data privacy issues arise when AI systems are trained on, collect, or generate sensitive personal information. Consent may be required before personal data collected for other purposes can be used to train AI models. It is also important to keep training datasets and outputs secure to avoid unlawful disclosure and data breaches.
Algorithmic bias and discrimination
Bias in AI training data can produce discriminatory outputs and unfair practices, which can be difficult to detect and correct at first. If an insurer or policyholder uses biased AI in underwriting, hiring, or customer-facing decision-making, it may face unlawful discrimination claims or enforcement action.
These are just some of the issues insurers need to address to ensure they are not inadvertently offering silent AI cover.
Regulatory developments in the Asia-Pacific region
Asia-Pacific (APAC) regulators are proactively developing governance frameworks to address the responsible use of AI in financial services and data privacy.
The Monetary Authority of Singapore (MAS) recently published two notable frameworks:
- The Information Paper on AI Model Risk Management, which sets out good practices for AI and generative AI model risk management and encourages all banks and financial institutions to refer to it when developing and deploying AI; and
- The FEAT Principles, which promote fairness, ethics, accountability and transparency in the use of AI and data analytics (AIDA) in Singapore’s financial sector. Developed in collaboration with the Singapore Personal Data Protection Commission, they aim to provide firms with foundational principles to consider when using AIDA.
Meanwhile, several data protection authorities, including those in Australia, Hong Kong and South Korea, have recently published their own AI-specific privacy guidance to address the risks posed by large-scale data processing, particularly in public-facing AI applications such as chatbots and generative AI.
Claims landscape and insurance considerations
Many types of claims may arise from AI technology. Potential exposures include:
- Consumer protection claims relating to AI-driven decisions that are inaccurate, biased, misleading or flawed.
- Data protection claims arising from the use of personal data without consent, or from insufficient transparency about how personal data is collected and used.
- Employment disputes arising from AI-driven decisions on hiring, promotion or termination.
- Intellectual property lawsuits, particularly where generative AI allegedly infringes copyright.
There is also a temptation to label any technology as “AI”, whether or not AI is actually involved. Such misleading claims, or “AI washing”, risk violating consumer law.
Insurers may find that traditional policy wordings are insufficiently clear as to whether such risks are covered or excluded. A proactive analysis of policy language – particularly exclusions, warranties and definitions – is essential. Perhaps ironically, AI itself may be part of the solution: AI tools may help insurers identify silent AI exposures by reviewing policy wordings, claims patterns, and emerging risk signals.