Costly Consequences of Unethical AI Whisperers



Yes, I’m talking about AI applications: the myriad of current and upcoming AI applications that whisper to humans about what to do and how to do it. The concern is not the humans interacting with AI chatbots; it’s the whisperer itself.

IDC forecasts that the global AI market could surpass $500 billion by 2024, an increase of more than 50% from 2021. Business experimentation has given way to acceptance: AI is now an integral part of corporate strategy for companies of all sizes, providing the tools to turn data into insights and act on better decisions. No one disputes AI’s benefits of reducing business risk and increasing ROI through innovation. But, as always, unbiased AI is easier said than done.

These business-critical AI models must operate reliably, with visibility and accountability. Otherwise, failure can have disastrous consequences, affecting the company’s cash flow and even leading to legal issues. The answer is automation and transparency that can answer one question: can you prove your AI is built ethically? In other words, how do you govern it, and can you prove that it remains under continuous governance?

This is where companies like IBM invest in AI governance: the overall process of directing, managing, and overseeing an organization’s AI activities. A key task is to ensure that all business units are actively engaged and infuse governance frameworks into their initiatives, strengthening their ability to meet ethical principles and regulations. Regulated industries such as banking and financial services, in particular, are legally required to provide evidence that satisfies regulators.

Under the tremendous pressure of digital transformation, AI’s impact is growing exponentially in the financial services sector. As mentioned earlier, responsible AI is easier said than done, for three reasons.

1. Operationalize AI apps with confidence

In some cases, models are built without clarity or cataloging, and monitoring of the end-to-end lifecycle falls through the cracks entirely. While banks struggle with legacy applications, automating processes for transparency and explainability becomes difficult, and the models turn into black boxes: no one knows why or how decisions were made. A new app intertwined with legacy apps may never see the light of day, despite the huge ROI attached to it, because of quality concerns and unrecognized risk.
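To make “clarity and cataloging” concrete, here is a minimal sketch of what a model catalog entry might capture so that lifecycle tracking doesn’t disappear. The fields and names are illustrative assumptions, not any specific governance product’s schema.

```python
# Minimal sketch of a model catalog entry (illustrative fields only,
# not a specific governance product's schema). Cataloging like this is
# what makes end-to-end lifecycle tracking possible.
from dataclasses import dataclass, field

@dataclass
class ModelCatalogEntry:
    model_id: str
    owner: str                        # an accountable human, not just a team alias
    business_purpose: str             # why the model exists
    training_data_sources: list[str]  # lineage of the data behind it
    lifecycle_stage: str              # e.g. "development", "validation", "production"
    last_validated: str               # ISO date of the last independent validation
    known_limitations: list[str] = field(default_factory=list)

# Hypothetical example entry for a production credit-risk model.
entry = ModelCatalogEntry(
    model_id="credit-risk-v3",
    owner="jane.doe@example.com",
    business_purpose="Score small-business loan applications",
    training_data_sources=["core_banking.loans_2018_2023"],
    lifecycle_stage="production",
    last_validated="2023-01-15",
)
print(entry.lifecycle_stage)
```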

This brings us to our second point: managing reputational risk.

2. Manage reputational risk along with overall risk

I asked #ChatGPT and #Bard: who is Padma Chukka? #ChatGPT refused to answer even after I rephrased the question multiple times, while Bard gave me a detailed response that included my LinkedIn profile. But the data came from various sites, and an old version of my profile still exists in a speaker bio. I haven’t opened Bard since; it turned me off that quickly. That is reputational risk: if I can switch off a simple chatbot the moment I notice its data may be inconsistent, how hesitant will a company be before deciding to buy an AI-infused application to run its critical business? Reputational risk is a key factor that companies often forget, and quantifying it shows that a lack of proactiveness can have a significant impact on the business.

Adding to the complexity is the third challenge…

3. How can businesses respond to changing AI regulations?

To avoid reputational risk, successful and responsible AI teams must keep up with every local and global regulation, with new ones appearing at a moment’s notice like TikTok videos. Non-compliance can ultimately cost organizations millions of dollars in fines: under the proposed EU AI Act, penalties can reach €30 million or 6% of the company’s global revenue, whichever is higher. OUCH.
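To make that exposure concrete, here is a toy calculation. The “whichever is higher” rule comes from the 2021 proposal text; the final Act’s figures may differ.

```python
# Toy illustration of maximum fine exposure under the proposed EU AI Act:
# the greater of EUR 30 million or 6% of worldwide annual turnover.
def max_eu_ai_act_fine(annual_turnover_eur: float) -> float:
    return max(30_000_000, 0.06 * annual_turnover_eur)

# A firm with EUR 2 billion in turnover faces up to EUR 120 million.
print(f"{max_eu_ai_act_fine(2_000_000_000):,.0f}")  # 120,000,000
```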

Well, things don’t have to be rosy from the start, as long as you know how to transform a scary situation into a rosy one.

Naturally, the answer is always people, process, and technology. First, create a cross-functional governing body to educate, direct, and oversee the initiatives based on purpose. Then benchmark your current AI technologies and processes, understand the gaps, and fix them to be future-proof. Next, rely on a set of automated governance workflows that align with your compliance requirements. Finally, set up a monitoring system that alerts owners when acceptable thresholds are being approached (a minimal sketch of this step follows the list below). From a technical standpoint, well-designed, well-executed, and well-connected AI requires multiple building blocks, so make sure you have some or all of the following capabilities:

Data consistency across diverse deployments

Open, flexible tooling that integrates with your existing AI governance tools

Self-service access with privacy controls and usage tracking

Design with automation and AI governance in mind

Customizable workflows that connect multiple stakeholders
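As promised above, here is a minimal sketch of the monitoring step: a threshold monitor that alerts a model owner when a tracked metric approaches its acceptable limit. The metric names, thresholds, and `notify_owner` helper are hypothetical placeholders, not any particular product’s API.

```python
# Minimal sketch of threshold-based model monitoring (hypothetical names,
# not a specific product's API). Alerts the owner when a tracked metric
# drifts toward its acceptable limit.
from dataclasses import dataclass

@dataclass
class MetricThreshold:
    name: str           # e.g. "drift_psi", "error_rate" (higher is worse here)
    limit: float        # value at which the model is out of policy
    warn_margin: float  # fraction of headroom at which to warn early

def notify_owner(email: str, message: str) -> None:
    # Placeholder: wire this to your alerting system (email, Slack, pager).
    print(f"ALERT to {email}: {message}")

def check_metrics(observed: dict[str, float],
                  thresholds: list[MetricThreshold],
                  owner_email: str) -> None:
    for t in thresholds:
        value = observed.get(t.name)
        if value is None:
            continue
        if value >= t.limit:
            notify_owner(owner_email, f"{t.name}={value:.3f} breached limit {t.limit}")
        elif value >= t.limit * (1 - t.warn_margin):
            notify_owner(owner_email, f"{t.name}={value:.3f} approaching limit {t.limit}")

# Example: drift is at 0.22 and the policy limit is 0.25, so warn early.
check_metrics(
    {"drift_psi": 0.22},
    [MetricThreshold(name="drift_psi", limit=0.25, warn_margin=0.2)],
    "model.owner@example.com",
)
```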

Once you’ve transformed your app from scary to rosy, the next question is how to prove it.

First, build on your company’s AI principles. But in a regulated environment like financial services in particular, you have to demonstrate compliance. Since financial services must already comply with NIST 800-53, a natural place to start is the NIST AI Risk Management Framework (AI RMF). NIST proposes four functions: govern, map, measure, and manage. Use them as a guideline to stress-test your application and identify gaps to remediate and monitor.
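As a rough illustration of using the AI RMF as a stress-test guideline, the sketch below encodes the four functions as a gap checklist. The questions are my own paraphrases for illustration, not official NIST control text.

```python
# Rough sketch: using the NIST AI RMF's four functions (govern, map,
# measure, manage) as a gap checklist. The questions are paraphrases
# for illustration, not official NIST control text.
AI_RMF_CHECKS = {
    "govern":  ["Is there a cross-functional body accountable for AI risk?"],
    "map":     ["Are the model's context, purpose, and data lineage cataloged?"],
    "measure": ["Are fairness, quality, and drift metrics tracked with thresholds?"],
    "manage":  ["Are remediation and monitoring owners assigned for each risk?"],
}

def find_gaps(answers: dict[str, bool]) -> list[str]:
    """Return the checklist questions that were not answered 'yes'."""
    gaps = []
    for function, questions in AI_RMF_CHECKS.items():
        for q in questions:
            if not answers.get(q, False):
                gaps.append(f"[{function}] {q}")
    return gaps

# Everything not explicitly answered 'yes' shows up as a gap to remediate.
print(find_gaps({"Is there a cross-functional body accountable for AI risk?": True}))
```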

IBM can validate models before they go into production and monitor them for fairness, quality, and drift. It can also provide documentation describing model behavior and predictions to meet regulatory and audit requirements. These explanations provide visibility, reduce audit pain, increase transparency, and improve the ability to identify potential risks.
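For a flavor of what fairness and drift monitoring actually computes, here is a self-contained sketch of two common checks: the disparate impact ratio and the population stability index (PSI). This is not IBM’s implementation, and the 0.8 and 0.25 thresholds in the comments are widely used rules of thumb, not regulatory requirements.

```python
import numpy as np

# Two common monitoring checks, sketched from scratch for illustration
# (simplified; real governance tooling handles edge cases and more metrics).

def disparate_impact(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Favorable-outcome rate for the unprivileged group (0) divided by
    the rate for the privileged group (1). A common rule of thumb flags
    values below 0.8."""
    return y_pred[group == 0].mean() / y_pred[group == 1].mean()

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between the training-time score
    distribution and the live one. Values above ~0.25 are often read as
    significant drift. Live scores outside the training range are
    ignored in this simplified version."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual) + 1e-6
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

# Synthetic example: the live score distribution has shifted upward.
rng = np.random.default_rng(0)
print(f"PSI: {psi(rng.normal(0.5, 0.1, 1000), rng.normal(0.6, 0.1, 1000)):.3f}")
```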

Listen to AI whispers with confidence!

#Financialservices #responsibleai #ethicalai #NISTAIRMF


