Risk experts grapple with the rapid expansion of AI

The adoption of artificial intelligence poses challenges for risk management professionals seeking to keep up with the rapid pace of change while protecting their businesses and organizations from emerging technology-related threats.

From setting up systems to separate sensitive data (see related article below) to creating company-wide usage guidelines, organizations are taking a variety of steps to manage the risks posed by AI deployments.

Regulation is also seen as a major potential risk as governments around the world work to support and manage AI adoption (see related article below).

John Farley, managing director of the cyber practice at New York-based Arthur J. Gallagher & Co., said brokers need to stay ahead of new risks.

“This is happening very quickly. Threats are emerging as we speak. Regulation is doing the same. The insurance industry is evolving…so risk management as a whole needs to be aware of that development,” he said.

This may require bringing together different expertise, Farley said.

“Think about the lawyers you’ll need for compliance. Think about the possibility that you’ll need the help of data scientists to truly understand the nature of these AI platforms. You’ll need specific, targeted expertise around AI risk management,” he said.

AI frameworks developed by the National Institute of Standards and Technology are a good place to start, he said. “The NIST framework is highly regarded as one of the model frameworks for security,” he said.

“NIST has come out with an artificial intelligence framework that is relatively comprehensive and very helpful in getting everyone to recognize the need for provisioning and governance,” said Maria Long, chief underwriting officer at New York-based cyber insurer Resilience.

The NIST AI Risk Management Framework was released on January 26, 2023, to help organizations manage risks associated with artificial intelligence; it is organized around four core functions: govern, map, measure and manage. The institute also published the NIST AI RMF Playbook and AI RMF Roadmap, and launched the Trustworthy and Responsible AI Resource Center on March 30, 2023.

Ms. Long also emphasized the need for diverse expertise. “You have to have the right stakeholders in place. That’s the data privacy officer, that’s the chief information security officer. It could also include stakeholders from different business units.”

Rob Malone, head of U.S. cyber at New York-based Axa XL, said some very large organizations are implementing governance frameworks and AI governance committees to guide how they use AI and which tools they deploy.

Malone said governance frameworks are typically drafted by legal teams with input from information security, risk management and compliance departments, and sometimes input from data science teams.

Will Lehman, global director of risk management at Bloomington, Ind.-based Cook Group and a director of the Risk and Insurance Management Society, said organizations have established cross-functional teams across privacy, legal, compliance and risk to implement the technology with a focus on governance and to enable responsible adoption at scale.

They are “formalizing AI governance through clear policies, defined use cases, approved tools, and data boundaries,” he said.
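A policy along the lines Lehman describes can be made concrete enough to enforce in software. Below is a minimal sketch in Python of what a machine-readable allowlist of approved tools and data boundaries might look like; the tool names and data classes are invented for illustration, not any company’s actual policy.

```python
# Hypothetical rendering of an AI usage policy: each approved tool is
# mapped to the data classes it is cleared to handle.
AI_POLICY = {
    "enterprise-chat": {"public", "internal"},
    "code-assistant": {"public"},
}

def tool_permitted(tool: str, data_class: str) -> bool:
    """Check a proposed use against the approved-tool and data-boundary lists."""
    return data_class in AI_POLICY.get(tool, set())

print(tool_permitted("enterprise-chat", "internal"))  # True: approved use
print(tool_permitted("code-assistant", "phi"))        # False: outside the data boundary
```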

Rachel Sark, vice president of risk management at Waltham, Mass.-based Benchmark Senior Living, highlighted privacy as the company’s main concern, because it maintains customer data subject to the U.S. Health Insurance Portability and Accountability Act. Such data is kept separate from AI tools, she said.

“Our health data is our biggest privacy and cybersecurity exposure. As long as we keep it isolated from AI tools, I don’t think AI will dramatically increase our internal risks,” she said.

“We adopted our first rudimentary tool in a controlled manner,” said Moulay Elalami, Benchmark’s senior vice president of information technology.

Benchmark knows how many people are testing or using AI tools, and no free versions of the software are deployed; all tools are licensed, and the license holders are known, Mr. Elalami said.

Barry Perkins, chief operating officer at Chicago-based Zurich North America, said the company centralizes the AI tools available to employees in an “AI lounge,” and that the tools are made available only after a “multidimensional” evaluation assessing factors such as data security.

At catastrophe modeler Karen Clark & Co., staff can experiment with AI tools, but “no one uses them without going through a process” to vet and evaluate the technology, said Karen Clark, founder and president of the Boston-based company.

Clark said the AI tools are used for standalone tasks such as database queries, but not within KCC’s codebase. “You can perform relatively simple or discrete tasks,” Clark said.

Industry stakeholders also recommend having clear and accessible AI policies for staff.

A clear, accessible AI policy document is one of the most effective ways to protect an organization, said Greg Eskins, Miami-based global cyber product leader at Marsh. Such a document can serve as a training guide for newcomers and “clarify the do’s and don’ts, especially when it comes to using tools for business purposes,” he said.

Henry Gardener, Markel’s chief risk officer, said documenting AI governance is “very important.” “We need to make sure the policy is clear enough and the principles are simple enough for everyone to follow,” he said.

“We have a particular document called AI Use and Governance, which is a policy document that we keep accessible to all staff and that outlines the process for adopting new tools,” said George Beattie, London-based head of innovation at CFC.

Training is also important, Gardener added. “What we try to do is make sure they get a lot of training.”

Staff will need to be trained on appropriate prompts when using AI, Long said.

Before an AI system’s output can be made available to the public, it must be reviewed or verified by humans.

“You need people who are properly trained and have the right skill set at all times,” said Resilience’s Long.

Lehman suggested the phrase “co-pilot, not autopilot.”

“One of the cornerstones of AI risk management would be to keep humans involved; otherwise you are blindly trusting the output, which in my opinion is a big mistake,” said Gallagher’s Farley.
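As an illustration of the “co-pilot, not autopilot” principle, here is a minimal sketch in Python of a human-in-the-loop gate that blocks AI output from release until a named reviewer signs off. The types and function names are hypothetical, invented for this example, not any insurer’s actual system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    """An AI-generated draft awaiting human sign-off."""
    text: str
    approved: bool = False
    reviewer: Optional[str] = None

def review(draft: Draft, reviewer: str, accept: bool) -> Draft:
    """Record a named human reviewer's decision on the draft."""
    draft.approved = accept
    draft.reviewer = reviewer
    return draft

def publish(draft: Draft) -> str:
    """Release output only after a human has approved it."""
    if not draft.approved or draft.reviewer is None:
        raise PermissionError("AI output requires human review before release")
    return draft.text

# The model drafts, a person decides, and unreviewed text cannot ship.
d = review(Draft(text="AI-generated customer notice ..."), reviewer="j.smith", accept=True)
print(publish(d))
```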


Regulators strive to keep pace with AI adoption

As the world races to adopt artificial intelligence, regulators are working to keep up with the pace of change.

Officials in Europe and the United States have begun to support adoption of the powerful new technology by issuing guidelines for deploying AI.

According to the European Parliament’s website, the EU Artificial Intelligence Act was first proposed in April 2021 “to ensure better conditions for the development and use of this innovative technology.” The law took effect on August 1, 2024.

In the United States, states have begun developing and enacting regulations, with California, Colorado, New York, Utah and Texas enacting legislation. Many more states are in various stages of the process, which in some ways mirrors the evolution of data breach regulation.

“This is very similar to what happened with cybersecurity regulation, where Europe passed the GDPR and then all the state laws followed. Something similar is starting to unfold around AI,” said John Farley, managing director of the cyber practice at New York-based Arthur J. Gallagher & Co.

He said California was the first state to pass a mandatory data breach notification law, in 2003.

“Over about 15 years, we saw all 50 states follow suit, and now we’re seeing a similar situation play out around AI regulation state by state. California already has a law in place. I’m a student of history, and I believe it definitely rhymes, if it doesn’t repeat itself,” Farley said.

Jeff Kulikowski, New York-based executive vice president and cyber and professional liability leader for Westfield Specialty, a subsidiary of Westfield Insurance, said regulatory developments are moving faster than expected.

“We’ve seen AI regulation start in Europe, but in the U.S. more state legislatures, Congress and government agencies are starting to focus on AI,” he said.

Henry Gardener, chief risk officer at Markel, said AI regulation is important and inevitable.

“The pace of adoption by regulators has increased because they recognize the need and are moving as quickly as possible to take prudent steps. The challenge for regulators is that the target continues to move,” he said.

Kulikowski said the proliferation and acceleration of AI tools is prompting regulatory action.

“People want clarity,” he said.

Westfield Specialty encourages policyholders to stay informed of regulatory developments and compliance. “We tell our insureds that they need to stay on top of the regulatory aspects to reduce their risk,” Kulikowski said.

Rob Malone, head of U.S. cyber at New York-based Axa XL, said policyholders are asking about coverage for regulatory compliance. “What we get asked about most often is the scope of regulation, the very broad scope of regulation,” Malone said.

Policyholders cite the EU’s AI Act, but they also want to understand other, similar regulations that broadly and comprehensively govern the use of AI. “We are seeing much more of that than last year,” Malone said.


Data governance seen as essential to managing AI exposures

Effective data governance plays an essential role in managing exposures related to artificial intelligence, insurance industry officials say.

Companies must prioritize protecting sensitive information when implementing AI, they say.

“Data governance is at the center of everything, and this is something that should be in place when implementing any AI tool,” said Maria Long, chief underwriting officer at New York-based cyber insurer Resilience.

Long recommends assessing privacy exposures before deploying AI tools to improve work processes and efficiency.

“Part of data governance is knowing whether the AI agents are using the data that is input to train their algorithms,” she said.

To maintain the confidentiality of customer data and comply with privacy regulations such as the European Union’s General Data Protection Regulation and numerous other laws, users must establish systems that keep data within their own organizations.

One way to accomplish this is to segregate artificial intelligence systems, said Barry Perkins, chief operating officer at Chicago-based Zurich North America.

At Zurich, data is segmented, or ring-fenced, and is never shared outside the company. This includes highly regulated customer data, he added.

“We want to make sure that what we put into [the large language model] and what we get out of it stays within a Zurich-specific version and doesn’t leak out to our competitors or to the world,” Perkins said.
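One common pattern for reducing the chance of sensitive data leaving an organization is to scrub prompts before they reach any external model. Below is a minimal sketch in Python, assuming deliberately simplistic regex patterns; the send_to_llm stub is a hypothetical placeholder, not a real vendor API.

```python
import re

# Illustrative patterns only; a production system would rely on a vetted
# PII-detection service and cover far more identifier types.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace sensitive identifiers with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def send_to_llm(prompt: str) -> str:
    """Hypothetical stub standing in for a call to an external model."""
    return f"(model response to: {prompt!r})"

def ask(prompt: str) -> str:
    """Gate every outbound prompt through redaction first."""
    return send_to_llm(redact(prompt))

print(ask("Claimant john.doe@example.com, SSN 123-45-6789, filed May 2."))
```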

At CFC, employees “don’t put sensitive information into any AI systems or programs unless they’re essentially enclosed within the company,” said George Beattie, CFC’s London-based head of innovation.

He said AI model vendors offer commercial agreements that allow customer data to be segregated.

“We can create a confined data space where data doesn’t leak into the wider world, and that made sense for AI vendors because otherwise companies wouldn’t engage,” Beattie said.

He said discussions about confidentiality could precede discussions about the capabilities of the technology.

Long said some AI platforms allow companies to opt out of having their data collected to train models. Such terms are typically available with enterprise-level subscriptions, she said.
