3 factors that could put your organization at AI risk



3. Understand the full range of risks

Consider how AI risk manifests itself in your business. How do you spot unfair outcomes that aren’t obvious?

For example, suppose you exclude gender from the datasets used in your AI models, believing this eliminates the risk of gender bias. You can check the box. But does the model still have access to first names? Has anyone considered that a first name can serve as a proxy for gender?

“Chain” risks must also be considered. It is increasingly common for AI models to be chained in sequence, where the output of one model becomes the input to another. For example, suppose you accept a 3% error rate and use a model whose results are considered accurate 97% of the time. But what happens when multiple models with similar tolerances are chained together? Errors can compound rapidly, especially if the first model in the sequence sends the subsequent models off in the wrong direction.
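The compounding effect is easy to quantify. A minimal sketch, assuming each model's errors are independent and that any upstream error corrupts everything downstream (the function name and the five-model chain are illustrative, not from the source):

```python
# Sketch: how per-model error rates compound in a naive model chain,
# assuming independent errors and no downstream error correction.
def chain_accuracy(per_model_accuracy: float, n_models: int) -> float:
    """Probability that every model in the chain produces a correct result."""
    return per_model_accuracy ** n_models

for n in range(1, 6):
    print(f"{n} model(s): {chain_accuracy(0.97, n):.1%} end-to-end accuracy")
```

Under these assumptions, a chain of five models that each meet a 97% tolerance delivers correct end-to-end results only about 86% of the time, far outside the 3% error budget any single model was approved for.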

So what should organizational leaders do?

“As organizations continue to discover, deploy, and scale new use cases leveraging artificial intelligence (including generative AI), the need for trust and confidence in results is paramount. This drives better outcomes, and it can be achieved by adopting a ‘responsible AI’ framework,” said Martin Sokalski, a KPMG Principal in Chicago who works at KPMG Lighthouse, which offers a suite of advanced technology capabilities that drive organizational optimization and sustainable growth.

Responsible AI focuses on applying the right controls at the right time to drive AI innovation and improved control posture.

  • Appropriate controls for each stage of the AI lifecycle: Implement technology, data-usage, privacy, and model-risk control points as the model reaches the corresponding stage of development.
  • Controls commensurate with risk: Models moving into production carry more risk than models still in development, so controls should tighten as a model moves closer to production. Controls must also be commensurate with the risks inherent in what is being built and what data is being used.
  • Automated workflows: Maintain and enforce your control posture through automated workflows that apply consistent ways of working and control points.
  • A safe zone for development: A controlled environment with quality-validated data sources approved for modeling use.
  • Nurture experiments: Allow seamless access to training environments and data for pre-approved use cases to facilitate model training (environment setup, onboarding, data access). As an experiment transitions from discovery to delivery, apply additional process steps such as access logging and usage notifications.
  • Post-deployment monitoring and measurement: Maintain visibility into the model inventory, model and feature changes, model performance over time, and model and feature metadata through a robust set of model tags and metrics.
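To make the last point concrete, here is a minimal sketch of what model tagging and post-deployment measurement might look like in a simple in-house registry. All names here (`ModelRecord`, the 0.97 threshold, the tag keys) are illustrative assumptions, not part of any KPMG framework:

```python
# Sketch of a model-registry entry that tracks stage, tags, and
# performance over time, flagging models that drift below tolerance.
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    name: str
    version: str
    stage: str                                  # e.g. "development" or "production"
    tags: dict = field(default_factory=dict)    # ownership and data-lineage metadata
    accuracy_history: list = field(default_factory=list)

    def log_accuracy(self, value: float) -> None:
        """Record a measured accuracy from post-deployment monitoring."""
        self.accuracy_history.append(value)

    def needs_review(self, threshold: float = 0.97) -> bool:
        """Flag the model when its latest measured accuracy falls below tolerance."""
        return bool(self.accuracy_history) and self.accuracy_history[-1] < threshold

model = ModelRecord("credit-scoring", "2.1", "production",
                    tags={"owner": "risk-team", "data": "validated"})
model.log_accuracy(0.98)
model.log_accuracy(0.95)
print(model.needs_review())  # latest accuracy is below the 0.97 tolerance
```

The point of the sketch is that the controls in the list above become enforceable only when inventory, tags, and metrics live in one queryable place rather than in spreadsheets.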

By implementing a robust responsible-AI program, you can recognize and manage the risks associated with AI and predictive analytics models with the same rigor applied to other enterprise risks.


