AFSLs urged to self-regulate AI use ahead of “governance reckoning”

As AI becomes more pervasive in the advice sector, industry experts are encouraging AFSLs to self-regulate its use, warning that the regulator is likely to launch an “AI governance reckoning” in the coming months.

According to Finura Group research, 86 per cent of advice businesses are now using AI, in part due to the industry’s need to reduce operating costs amid a rising regulatory burden.

Peter Warne, co-managing director of Finura Group, said during the firm’s annual review webinar that companies using AI need to build strong compliance and governance frameworks, as governments and laws cannot keep up with the rapidly evolving technology.

“Like all technology, it advances faster than our regulations and professional standards can keep up. Technology is simply more nimble. We’re dealing with a federal government that doesn’t yet understand encryption; AI is no exception.

“We’re not in a position to wait for governments to regulate how AI should or should not be used; the technology is moving too fast.

“Both I personally and Finura are big supporters of self-regulation, so I think this is something every AFSL, every firm, needs to decide for itself: how it regulates and uses AI, for a variety of reasons, even just from a legal and ethics perspective.”

This means that companies need to consider where they draw the line when it comes to their use of AI, while also considering how their clients will feel about the technology being used.

“There are two important questions to consider: what are you comfortable with as a business in how you use AI, and what risks are you willing to take there? And what level of risk are you willing to take on behalf of your clients?” Warne said.

Meanwhile, AFSLs may also want to get ahead of the ASIC “AI governance reckoning” that Warne predicts for this year, as regulators, unlike government, can take a more nimble approach to clamping down on misuse of the technology in the advice sector.

“Our prediction for this year is that we’re going to see some AI enforcement activity. There’s going to be some kind of public AI-related failure in our advice industry this year, one way or another, and regulators are going to come down very hard.”

Tali Borowicz, a lawyer at Holley Nethercote, raised similar concerns. In analysis published earlier this month, she warned that it is “only a matter of time” before ASIC takes action against licensees for data breaches and governance failures related to the use of AI.

“From stricter compliance obligations under new global standards to greater accountability for algorithmic decision-making, regulators will demand greater transparency, fairness and risk management,” Borowicz said.

“Companies that act now to align with these evolving requirements will not only reduce risk but also gain a competitive advantage in an increasingly regulated environment.

“The message is clear: 2026 will not just be the year of innovation, it will be the year of responsibility.”

Why is it so important to regulate the use of AI?

Because of the way most AI, particularly large language models, is designed, Warne suggested that some users may be lulled into a false sense of security by the technology’s friendly, seemingly helpful manner.

This becomes a problem when the AI “hallucinates” and provides false information just to give the user what they want.

Adding to this is people’s tendency to overestimate their own skills and act with misplaced confidence when using AI, Warne said, which can lead to poor decision-making by those at the so-called “peak of Mount Stupid”.

“We’ve seen a lot of that in the last 12 months, where we’ve had people very senior within organisations jump to big conclusions about what AI is going to do for them and assume that whatever AI gives them is right.”

Given this, the idea that deploying AI is a way for companies to reduce operating costs and shed headcount may be a mistake, especially for firms in financial services where compliance is so important.

“You need a lot of people in-house to validate the output of AI. I was talking to a very large licensee about this, and we were looking at whether compliance could save a lot of money with AI.

“She made a great point: in the future, we’ll probably need those same people to validate AI output, keeping humans involved.”

For companies choosing to use AI, not only must cyber security risks be kept in mind, but there are also questions about how trustworthy the data sources the AI is drawing on are, which ultimately come down to integration issues.

“The main thing these tools lack is access to data, which is why we see this as a bit of a risk, and for good reason. A lot of the data input they’re getting is from meetings with clients and other documents that are fed in, but it’s not necessarily integrated with everything, so they’re not drawing on what we would call a reliable system of record.

“No doubt things will change, but the reality is that we’re going to see more and more of this because it’s so easy to start a software business right now and it’s so cheap to build simple applications using AI.

“Don’t get me wrong, many, if not all, of these businesses, especially those coming from offshore, are here for a good time, not a long time. They know it takes maybe two years to build a business with a lot of market share, then sell it to private equity and exit.”
