Ethical Use of AI in Insurance Modeling and Decision Making

Insurers' use of external consumer data sets and analytical models enabled by artificial intelligence (AI) and machine learning (ML) is expanding rapidly as next-generation technology and data-mining tools become more widely available. Insurers are initially targeting key business areas such as underwriting, pricing, fraud detection, marketing and distribution, and claims management, leveraging these technological innovations to enhance risk management, increase revenue, and improve profitability. At the same time, regulators around the world are paying increasing attention to the governance and fairness challenges posed by these complex and highly innovative tools: specifically, the potential for unintentional bias against protected classes of people.

Regulatory activity heats up

In the United States, the Colorado Division of Insurance recently issued the nation's first draft regulations to support implementation of legislation passed by the state legislature in 2021.1 That law (SB21-169) prohibits life insurers from using external consumer data and information sources (ECDIS), or algorithms and predictive models that rely on ECDIS, in any way that results in unfair discrimination against consumers on the basis of race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression.2 In pre-release public meetings with industry stakeholders, the Division also signaled that similar rules should be expected for property and casualty insurers in the not-too-distant future. Similarly, UK and EU regulators are creating new policies and legal frameworks to prevent consumer bias arising from AI models, to ensure the transparency and explainability of model-based decision-making to customers and other stakeholders, and to hold accountable the insurers that use these capabilities.3

Clearly, regulators around the world believe that well-defined guardrails are needed to ensure the ethical use of external data and AI-powered analytics in insurance decision-making. Additionally, in some jurisdictions, public oversight and standards bodies such as the U.S. Department of Commerce's National Institute of Standards and Technology (NIST) are issuing guidance on building AI/ML decision-support models that do not discriminate against protected classes of consumers.4 Examples of potentially objectionable data include:

  • Personal credit scores

  • Social media habits

  • Home ownership

  • Educational background

  • Driver's license records

  • Civil judgments

  • Court records

  • Occupations not directly associated with mortality, morbidity, or longevity risk

  • Insurance risk scores derived from this and similar information

  • Data, such as individual consumer purchasing preferences, that act as a proxy for protected information

New reporting rules need more resources

Based on the recently released Colorado draft regulations, the anticipated breadth of pending new rules on AI and external data sets could represent a formidable operational challenge for insurers trying to balance proactive risk management, market penetration, and profitability targets with the principle of consumer fairness. For many insurers, in-house data science and technology resources, already overwhelmed with day-to-day operations, will be insufficient to fulfill the expected reporting and model-testing obligations across the multiple jurisdictions in which the company operates. In other cases, insurers may lack adequate test data and the skill sets needed to assess potential model biases. In either case, model-testing and disclosure obligations will continue to increase, requiring support to meet regulatory demands and avoid the significant business impact of non-compliance.
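To make the model-testing obligation concrete, here is a minimal sketch of one common fairness heuristic, a disparate-impact ratio check. The 0.8 threshold (the "four-fifths rule") and the sample decision data are illustrative assumptions, not a standard prescribed by SB21-169 or the Colorado draft regulations:

```python
# Minimal disparate-impact check for a binary model decision
# (e.g., "approve" = 1 vs. "decline" = 0). Purely illustrative.

def approval_rate(decisions):
    """Share of favorable (1) outcomes in a list of 0/1 decisions."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(protected, reference):
    """Ratio of the protected group's approval rate to the reference group's."""
    return approval_rate(protected) / approval_rate(reference)

# Hypothetical model decisions for two groups of applicants
reference_group = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # 80% approved
protected_group = [1, 0, 1, 0, 1, 0, 0, 1, 0, 1]   # 50% approved

ratio = disparate_impact_ratio(protected_group, reference_group)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb
    print("Potential disparate impact: flag model for review")
```

A real regulatory testing program would go well beyond this, covering proxy variables, statistical significance, and outcome testing across multiple protected attributes, but the basic shape of the check is the same.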

So how can insurers and their data science and technology teams meet the operational challenges that evolving data privacy and model ethics regulations will surely present? One option is to partner with seasoned experts who understand the data and processing complexities of nonlinear AI/ML-enabled models. The best of these external providers combine in-depth knowledge of the insurance domain, which ensures testing is done in context, with reliable, independent, market-proven test data and test methodologies that can be readily explained to regulators and other stakeholders.

Balancing compliance and financial performance

The burden of regulatory compliance in the insurance industry is not going away, and it can undermine a firm's ability to achieve targeted business benefits if the two seemingly conflicting goals of compliance and profitability are not managed in a proactive, strategic, and well-supported manner. With proper guidance and execution, insurers that comply with the new regulations on AI-powered decision-support models and external data sets can realize many tangible benefits beyond compliance itself: more robust analytical models, improved new business, greater profitability, operational scalability, and a superior customer experience that increases brand loyalty, driving customer retention and enhanced lifetime value.


1. "SB21-169 – Protecting Consumers from Unfair Discrimination in Insurance Practices," Colorado Department of Regulatory Agencies, Division of Insurance (last accessed February 22, 2023).

2. "SB21-169 – Restricting Insurers' Use of External Consumer Data," Colorado General Assembly (last accessed February 22, 2023).

3. Claudio Calvino and Meloria Meschi, "AI Bias: EU Artificial Intelligence Law Coming – Need to Prepare" (13 December 2022).

4. Reva Schwartz et al., “Towards Standards for Identifying and Managing Bias in Artificial Intelligence,” NIST Special Publication 1270 (March 2022).

The content of this article is intended to provide a general guide on the subject. You should seek professional advice for your particular situation.
