Is artificial intelligence the right technology for risk management?

Applications of AI


To minimize threats and maximize rewards, risk professionals are increasingly turning to artificial intelligence. AI is already used to identify patterns and behaviors that may indicate fraud or money laundering, and, more controversially, to verify customer identities from facial features. Even so, widespread use of AI for risk management within financial institutions remains limited.

But now the release of AI chatbots such as ChatGPT, which use natural language processing to understand user prompts and generate text or computer code, is likely to transform risk management functions at financial services firms.

Some experts believe AI will be used across most areas of financial risk management within the next decade: assessing new types of risk, exploring ways to mitigate it, and automating and speeding up the work of risk officers.

“The genie is out of the bottle,” says Andrew Schwartz, an analyst at Celent, a research and advisory group that specializes in financial services technology. He estimates that more than half of the largest financial institutions are now using AI to manage risk.

Growing market

Conversational or “generative” AI technologies such as OpenAI’s ChatGPT and Google’s Bard can already analyze vast amounts of data contained in corporate documents, regulatory filings, stock quotes, news reports and social media.

It could, for example, improve current methods of assessing credit risk, or be used to create more complex and realistic “stress test” exercises that simulate how financial firms might respond to adverse market and economic conditions, Schwartz says. “It just gives us more information, and the more information we have, the deeper, in theory, we might be able to understand the risks.”

Some financial institutions are in the early stages of using generative AI as virtual assistants for risk officers, said Sudhir Pai, chief innovation officer for financial services at consulting firm Capgemini.

Such assistants draw together financial market and investment information and offer advice on strategies for reducing risk. “[An] AI assistant for risk managers can help us gain new insights about risk in less time,” he explains.

Financial institutions are typically reluctant to talk about their early use of generative AI for risk management, but Schwartz suggests they may be grappling with an important issue: scrutinizing the quality of the data fed into AI systems and weeding out false information.

Initially, large companies may focus on testing generative AI in areas of risk management where traditional AI is already widely used, such as crime detection, says Maria Teresa Tejada, a partner at Bain & Company specializing in risk, regulatory and finance consulting.

She believes generative AI is a “game changer” for financial institutions because it can capture and analyze not only large volumes of structured data, such as spreadsheets, but also unstructured data, such as legal contracts and call records.

“Banks can now better manage risk in real time,” says Tejada.

SteelEye, a maker of compliance software for financial institutions, has already tested ChatGPT with five customers. It created nine “prompts” for ChatGPT to use when analyzing client text communications for regulatory compliance purposes.

SteelEye copied and pasted the text of client communications, including email threads, WhatsApp messages and Bloomberg chats, to see whether ChatGPT would identify suspicious exchanges and flag them for further investigation, asking it, for example, to look for signs of possible insider trading activity.

SteelEye chief executive Matt Smith says ChatGPT has proven effective at analyzing and identifying suspicious communications for further investigation by compliance and risk professionals.

“Something that could take compliance professionals hours to scrutinize takes [ChatGPT] minutes or seconds,” he points out.
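SteelEye has not published its prompts, but the workflow it describes (wrap each communication thread in a fixed screening prompt, send it to a chat model, and route anything flagged to a human reviewer) can be sketched roughly as follows. The prompt wording, function names and stubbed model here are illustrative assumptions, not SteelEye’s actual implementation:

```python
# Hypothetical sketch of an LLM-based compliance screen, in the spirit of
# the SteelEye test described above. The prompt text and the stub model
# are invented for illustration.

COMPLIANCE_PROMPT = (
    "You are a trade-surveillance assistant. Review the communication "
    "below and answer FLAG if it shows possible insider trading, such as "
    "sharing material non-public information; otherwise answer CLEAR.\n\n"
    "Communication:\n{thread}"
)

def build_prompt(thread: str) -> str:
    """Embed one client communication thread in the screening prompt."""
    return COMPLIANCE_PROMPT.format(thread=thread)

def screen(thread: str, ask_llm) -> bool:
    """Return True if the model flags the thread for human review.

    `ask_llm` is any callable that sends a prompt to a chat model (for
    example, a thin wrapper around a vendor's chat API) and returns the
    model's text reply. The final decision stays with a compliance
    officer, as Smith stresses below.
    """
    reply = ask_llm(build_prompt(thread))
    return reply.strip().upper().startswith("FLAG")

# Usage with a stubbed model; a real deployment would call a chat API.
fake_llm = lambda prompt: "FLAG: mentions an unannounced earnings figure"
print(screen("Buy before Friday - the Q3 numbers aren't public yet.", fake_llm))
# prints True
```

The key design point is that the model only triages: its output is reduced to a flag/clear signal feeding a human review queue, rather than being treated as a compliance determination in itself.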

Accuracy and bias

However, some have expressed concern that ChatGPT, which pulls in data from sources such as Twitter and Reddit, could generate false information and violate privacy.

Smith countered that ChatGPT is only being used as a tool and that compliance officers have the final say on whether to act on the information.

Still, it is questionable whether generative AI is the right technology for the risk management departments of highly regulated and inherently prudent financial institutions that need to carefully validate data and complex statistical models.

“ChatGPT is not the answer to risk management,” says Moutusi Sau, a financial services analyst at research firm Gartner.

One of the issues identified by the European Risk Management Council is that the complexity of ChatGPT and similar AI technologies can make it difficult for financial services firms to explain the systems’ decisions. Such systems, whose results cannot be explained, are known in AI parlance as “black boxes”.

Developers of AI for risk management and their users need to have a clear understanding of data assumptions, weaknesses and limitations, the council suggests.

Regulatory questions

Complicating matters, regulatory approaches to AI differ around the world. In the US, the White House recently met tech executives to discuss the use of AI ahead of drafting guidelines, while the EU and China have already drafted legislation to regulate AI applications. In the UK, meanwhile, the competition watchdog has begun a review of the AI market.

So far, discussion of that regulation has focused on the individual’s right to privacy and protection from discrimination. However, regulation of AI in risk management may require a different approach so that broad principles can be translated into detailed guidance for risk personnel.

“My feeling is that regulators will work with what they have,” said Zayed Al Jamil, a partner in the technology group at law firm Clifford Chance.

“They wouldn’t say that [AI] is prohibited [for risk management] or be very prescriptive . . . I think existing regulations will be updated to take AI into account,” he says.

Despite these regulatory questions and questions about the reliability of generative AI in managing risk in financial services, many in the industry believe generative AI will become more commonplace. Some suggest that many aspects of risk management could be improved simply by automating data analysis.

Celent’s Schwartz remains “bullish” on the potential of AI in financial institutions. “In the medium term, I think we will see significant growth in [AI tools],” he says.


