DNB's AI Guidance: Balancing Innovation and Prudence

Applications of AI


AI adoption in Dutch insurance

DNB's industry assessment revealed that of the 36 insurers surveyed, 15 have already applied AI to their business processes. Common applications include analyzing unstructured data for risk assessment, developing personalized product recommendations, implementing fraud detection mechanisms, and automating billing processes. Most insurers view AI primarily as a tool to improve operational efficiency and enhance customer experience.

A notable finding is that few insurers have established a dedicated AI roadmap or a clear long-term strategic vision. When asked about obstacles to AI development, insurers cited a lack of internal expertise and fundamental shortcomings in data infrastructure and data quality as the main barriers to progress.

Balanced risk perspective

Most insurers identified non-financial risks as their main concern, including potential reputational damage and business continuity issues. Many companies expressed concern that opaque, AI-driven decisions could erode customer trust or violate ethical norms.

Although DNB acknowledges these concerns, financial (and other prudential) risks remain DNB's primary focus. The DNB review specifically flagged AI applications in areas such as asset allocation and reserve optimization, where flaws could harm consumer interests or the financial soundness of the insurer.

The AI regulatory framework in insurance

DNB clearly establishes that insurers remain fully subject to all existing legal requirements when deploying AI solutions. All current compliance obligations apply in full to AI-driven activities, and financial institutions are expressly expected to meet them. These include data protection regulations, consumer protection laws, anti-discrimination provisions, and Solvency II governance requirements.

Additionally, the European Union's AI Act, which came into effect in 2024 and began phased implementation in early 2025, introduces specific requirements for high-risk AI systems. DNB emphasizes that insurers must fully comply with these standards, which cover rigorous risk assessments and human oversight of certain algorithmic systems. Even AI systems that are not formally classified as “high risk” require proper control and monitoring.

DNB has aligned its approach with forthcoming sector-specific guidance from the European Insurance and Occupational Pensions Authority (EIOPA), expected in the second half of 2025. This signals close regulatory attention to AI in insurance, with supervisory mechanisms likely to evolve as the broader regulatory environment develops.

The SAFEST principles

Until further guidance becomes available from EIOPA, the foundation of DNB's guidance remains the six SAFEST principles, which define what regulatory authorities in the financial sector mean by “responsible AI.”

  1. Soundness: AI applications must demonstrate technical robustness, reliability, and accuracy, and operate within the scope of applicable rules and regulations. Insurance companies should conduct thorough testing and validation of AI models to prevent errors and ensure that the models receive high-quality data input. From a prudential perspective, DNB considers soundness paramount: systemic risks can emerge if multiple companies rely on similarly flawed AI tools.
  2. Accountability: Insurers need to maintain clear human accountability for all decisions and outcomes generated by AI. DNB expects organizations to specify appropriate monitoring mechanisms and assign ultimate responsibility for AI-driven processes within their management structures. This requires a robust governance structure surrounding AI deployment, keeping algorithmic operations within defined risk appetite parameters.
  3. Fairness: Implementing AI should not undermine fair treatment of customers or introduce bias into decision-making processes. Insurance companies are expected to define fairness within a specific context and demonstrate that AI models adhere to these standards. This includes conducting comprehensive bias audits and using diverse datasets for model training.
  4. Ethics: Beyond legal compliance, DNB emphasizes the importance of ethical considerations in AI use. Ethical AI practices include respecting customer privacy, taking into account the social impact of personalized pricing strategies, and maintaining an appropriate level of solidarity in insurance risk pooling. The Dutch Association of Insurers' “Ethisch Kader Datagedreven Toepassingen” (Ethical Framework for Data-Driven Applications) represents a valuable industry initiative that DNB views favorably.
  5. Skills: DNB expects insurers to invest in AI knowledge development across the organization. From board directors to frontline staff, personnel need to understand the fundamentals of how AI models operate, including their limitations and potential risks. This may require hiring specialized data scientists and training senior management to ask the right questions about AI initiatives.
  6. Transparency: Insurance companies should prioritize transparency regarding AI use. This means being able to explain AI-generated decisions and communicating clearly where AI is applied. It does not require disclosing proprietary algorithms, but insurers must maintain documentation that can be explained to regulators and, where relevant, to customers.
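As an illustration of the fairness principle above, a basic bias audit might compare favourable-outcome rates across customer groups (a demographic parity check). The following sketch is purely illustrative: the function name, sample data, and grouping variable are hypothetical, not taken from DNB's guidance.

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Largest gap in favourable-outcome rates between groups.

    decisions: iterable of (group, approved) pairs, approved being a bool.
    Returns (gap, per-group rates).
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical underwriting decisions: (region, accepted)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
gap, rates = demographic_parity_gap(sample)
# A gap above the insurer's own fairness threshold would trigger manual review.
print(round(gap, 2), rates)
```

In practice such a check would run over production decision logs and feed the bias audits the fairness principle calls for; the acceptable gap is a policy choice, not a technical constant.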

Collectively, these SAFEST principles constitute a comprehensive framework for AI governance in the insurance sector. The DNB message is clear: insurers should actively integrate these principles into their policies and systems to ensure that AI innovations remain within safe and acceptable parameters.

Actionable steps for insurance companies

In light of DNB's guidance, insurers operating in the Dutch market should implement specific measures to meet supervisory expectations. The main focus is developing clear AI strategies and governance structures. This may include establishing a dedicated AI committee that includes representatives of compliance and risk management, adopting internal policies regarding AI use, and documenting AI applications currently being used or planned.

Creating a detailed inventory of current AI systems is essential to determine which applications are subject to AI Act requirements. An early compliance plan is highly recommended, especially for high-risk AI systems that may be subject to registration obligations or specific conformity standards.
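One way to start such an inventory is a structured register that records each AI application alongside a provisional risk class and an accountable owner. The sketch below is a minimal, hypothetical example; the field names and risk labels are illustrative and do not reproduce the AI Act's formal taxonomy.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    business_process: str
    vendor: str          # "internal" for in-house models
    risk_class: str      # provisional label: "high", "limited", "minimal"
    owner: str           # accountable function, per the accountability principle

# Hypothetical entries in an insurer's AI register.
inventory = [
    AISystem("fraud-scorer", "claims handling", "internal", "high", "CRO"),
    AISystem("doc-classifier", "mailroom triage", "VendorX", "minimal", "COO"),
]

# Provisionally high-risk systems feed the early AI Act compliance plan.
high_risk = [s.name for s in inventory if s.risk_class == "high"]
print(high_risk)
```

Even a register this simple answers the two questions DNB's guidance raises first: which systems may fall under the AI Act, and who inside the organization is responsible for each.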

Additionally, insurance companies must embed the SAFEST principles in their operations through concrete practices. These include conducting rigorous model validation, assigning clear responsibility for AI outcomes, reviewing models for potentially biased outcomes, instituting ethics reviews before deploying new AI solutions, investing in training programs to build internal AI expertise, and maintaining thorough documentation.

Third-party AI risk management represents another important area. DNB clearly emphasizes the need for insurers to control the risks arising from AI systems provided by external vendors. This includes implementing thorough due diligence and incorporating contractual provisions that address data quality, performance metrics, audit rights and regulatory compliance.

Finally, insurers need to maintain active engagement with regulators and industry initiatives as AI oversight continues to evolve. DNB has indicated that it will conduct more detailed, risk-based examinations of AI implementations at selected insurers in the second half of 2025, suggesting that regulatory expectations may continue to develop.

Making AI accountable

Taken together, DNB's supervisory priorities for 2025, the AI Act, and the expected EIOPA guidance make clear that the era of “light-touch” experimentation with AI in insurance is coming to an end. Going forward, Dutch insurers are expected to apply the same level of rigor to AI model governance as to other significant risk or compliance matters.

By proactively strengthening AI oversight through comprehensive strategies, detailed policies, and an organizational culture that emphasizes ethical AI use, insurers will not only meet regulatory requirements but also position themselves favorably in an increasingly AI-driven market. As DNB has rightly observed, sound and ethical AI practices are essential to maintaining public trust and operational stability in the insurance sector as technological change accelerates. Insurers that proactively align with the new guidance will be best positioned to harness the benefits of AI “safely.”


