Prioritizing Trust in AI



Society's reliance on artificial intelligence (AI) and machine learning (ML) applications continues to grow, redefining the way information is consumed. From AI-powered chatbots to summaries generated by large language models (LLMs), society has access to more information and deeper insights than ever before. However, as technology companies race to embed AI across their value chains, a key question looms: can you really trust the output of your AI solution?

Can you really trust AI output without quantifying uncertainty?

For a given input, a model could plausibly have produced many other, equally valid outputs. This may be due to insufficient training data, variation in the training data, or other causes. Uncertainty quantification is the process of estimating what those other outputs could have been. When deploying models, organizations can use uncertainty quantification to give end users a clearer sense of how much they should trust the output of an AI/ML model.

Imagine a model that predicts tomorrow's high temperature. The model may output 21 °C, but uncertainty quantification applied to that output may reveal that the model could just as easily have produced 12 °C, 15 °C, or 16 °C. Knowing this, how much would you trust the single prediction of 21 °C? Despite its potential to build trust and flag when caution is needed, many organizations have chosen to skip uncertainty quantification because of the additional work required to implement it and the demands it places on computing resources and inference speed.
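As a minimal illustration (the ensemble values below are made up to mirror the temperature example, not taken from a real model), one common way to surface this spread is to query an ensemble of models on the same input and report the range of outputs rather than a single number:

```python
import numpy as np

# Hypothetical point forecasts for tomorrow's high (°C) from an ensemble of
# models trained on slightly different data.
ensemble_forecasts = np.array([21.0, 12.0, 15.0, 16.0, 19.5, 14.0])

point_prediction = ensemble_forecasts.mean()
low, high = np.percentile(ensemble_forecasts, [5, 95])

print(f"Point prediction: {point_prediction:.1f} °C")
print(f"90% interval:     {low:.1f} °C to {high:.1f} °C")
# A wide interval is a signal that the single number deserves less trust.
```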

Human-in-the-loop systems, such as those used for medical diagnosis and prognosis, keep humans as part of the decision-making process. If healthcare professionals blindly trust the output of AI/ML solutions, they risk misdiagnosing patients, potentially leading to substandard health outcomes or worse. Uncertainty quantification gives healthcare professionals a quantitative view of when they can rely on AI output and when specific predictions need to be treated with caution. Similarly, in fully automated systems such as self-driving cars, an overconfident model output estimating the distance to an obstacle can lead to crashes that could have been avoided had the uncertainty in the distance estimate been quantified.
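To make the human-in-the-loop idea concrete, here is a minimal sketch, assuming an ensemble-based predictor and an illustrative uncertainty threshold (both hypothetical, not from the article), of routing low-confidence outputs to a human reviewer instead of acting on them automatically:

```python
import numpy as np

def predict_with_uncertainty(x, models):
    """Query an ensemble of models and summarize the spread of their outputs."""
    outputs = np.array([m(x) for m in models])
    return outputs.mean(), outputs.std()

def triage(x, models, max_std=0.5):
    """Accept confident predictions automatically; refer uncertain ones to a human."""
    estimate, spread = predict_with_uncertainty(x, models)
    decision = "refer_to_human" if spread > max_std else "auto_accept"
    return {"decision": decision, "estimate": estimate, "spread": spread}

# Toy usage: three disagreeing "models" trigger a referral to a human reviewer.
models = [lambda x: 0.9 * x, lambda x: 1.1 * x, lambda x: 2.0 * x]
print(triage(5.0, models))
```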

The challenge of using Monte Carlo methods to build trust in AI/ML models

Monte Carlo methods, developed during the Manhattan Project, are a robust way to implement uncertainty quantification. They involve re-running an algorithm repeatedly with slightly different inputs until further iterations no longer add much information to the output; once the process reaches that state, it is said to have converged. One drawback of Monte Carlo methods is that they are usually slow and computationally intensive: many iterations of the underlying calculation are needed to obtain a converged output, and the outputs still carry inherent variability. Because Monte Carlo methods rely on a random number generator as one of their key building blocks, even a run with many internal iterations will give slightly different results when the process is repeated with the same parameters.
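As a minimal sketch of this loop (the toy model, noise level, and stopping rule are illustrative assumptions): perturb the input, re-run the model, and stop once the running mean of the outputs stabilizes. Re-running with the same parameters but a different seed yields a slightly different answer, which is the run-to-run variability described above.

```python
import numpy as np

def monte_carlo_uq(model, x, noise_scale=0.1, tol=1e-4, min_iter=500,
                   max_iter=100_000, seed=0):
    """Estimate the output distribution of `model` at input `x` by re-running
    it on randomly perturbed inputs until the running mean of the outputs
    changes by less than `tol` between iterations (a simple convergence test)."""
    rng = np.random.default_rng(seed)
    outputs = []
    prev_mean = None
    for i in range(max_iter):
        perturbed = x + rng.normal(scale=noise_scale)
        outputs.append(model(perturbed))
        mean = np.mean(outputs)
        if i >= min_iter and abs(mean - prev_mean) < tol:
            break
        prev_mean = mean
    return np.array(outputs)

# Toy model: results differ slightly between seeds even after convergence.
for seed in (0, 1):
    samples = monte_carlo_uq(lambda v: 10 * np.sin(v) + 15, x=1.2, seed=seed)
    print(f"seed={seed}: mean={samples.mean():.3f}, std={samples.std():.3f}, n={len(samples)}")
```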

The path to reliable AI/ML models

Unlike traditional servers and AI-specific accelerators, a new class of computing platform is being developed to operate directly on empirical probability distributions, in the same way traditional computing platforms operate on integers and floating-point values. By deploying AI models on these platforms, organizations can automate the implementation of uncertainty quantification for pre-trained models, and can also speed up other kinds of computation that traditionally relied on Monte Carlo methods, such as value-at-risk (VaR) calculations in finance. For VaR in particular, this kind of platform lets organizations work with empirical distributions built directly from real market data, rather than with approximations driven by samples from a random number generator, yielding more accurate analyses and faster results.
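To illustrate the VaR point, here is a minimal historical-simulation sketch (the returns below are made up): the loss quantile is read directly off the empirical distribution of observed returns, with no random number generator involved.

```python
import numpy as np

# Hypothetical daily portfolio returns (as fractions), standing in for the
# empirical distribution built from real market data.
returns = np.array([0.004, -0.012, 0.007, -0.025, 0.001, -0.008, 0.010,
                    -0.031, 0.006, -0.002, 0.003, -0.015, 0.009, -0.005])

portfolio_value = 1_000_000  # USD
confidence = 0.95

# 95% one-day VaR: the loss at the 5th percentile of the empirical distribution.
var_return = np.percentile(returns, (1 - confidence) * 100)
value_at_risk = -var_return * portfolio_value
print(f"1-day VaR at {confidence:.0%}: ${value_at_risk:,.0f}")
```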

Recent breakthroughs in computing have significantly lowered the barriers to uncertainty quantification. A research article my colleagues and I published at the Machine Learning with New Compute Paradigms workshop at NeurIPS 2024 shows how a next-generation computing platform can run uncertainty quantification more than 100 times faster than traditional Monte Carlo-based analysis running on high-end Intel Xeon-based servers. These advances allow organizations deploying AI solutions to implement uncertainty quantification with low overhead.

The future of AI/ML reliability depends on next-generation computing

As organizations integrate more AI solutions into society, the reliability of AI/ML becomes a top priority. Companies cannot afford to skip building facilities into their deployed AI models that let consumers know when to be skeptical of a particular output. The demand for such explainability and uncertainty quantification is clear: surveys indicate that around three in four people would be more willing to trust AI systems when appropriate assurance mechanisms are in place.

New computing technologies are making uncertainty quantification easier to implement and deploy. While industry and regulatory bodies tackle the other challenges associated with deploying AI in society, standardizing on uncertainty quantification offers an opportunity to build at least part of the trust that people need.


