Science
Newswise — When you use a trained model to make predictions, those predictions carry uncertainty. A trained model is an artificial intelligence (AI) model whose parameters have been adjusted to reflect data. Quantifying prediction uncertainty means estimating how far a prediction may deviate from reality when the model does not fully know the situation. Accurately quantifying uncertainty is especially important in safety-sensitive settings such as fusion energy devices. For example, researchers working on fusion energy have recently used deep reinforcement learning (DRL) algorithms, a class of AI learning methods, to control fusion plasmas. Properly modeling the uncertainties in these models is critical to plasma control performance. However, assessing prediction uncertainty is difficult and requires investigating multiple sources of uncertainty. To help with this, researchers have developed the Uncertainty Toolbox. This open-source software toolbox provides tools for assessing the quality of predictive uncertainty, including evaluation metrics and data visualizations, as well as methods for recalibrating predictive uncertainties.
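As a rough illustration of the kind of workflow this enables, the sketch below scores a toy regression model's predictive uncertainty with the toolbox. The synthetic data are invented for this example, and the call to uct.metrics.get_all_metrics follows the toolbox's documented usage pattern; treat the exact signature as an assumption to check against the current release.

    import numpy as np
    import uncertainty_toolbox as uct  # pip install uncertainty-toolbox

    # Toy regression problem: the model's mean predictions, its claimed
    # predictive standard deviations, and the observed ground-truth values.
    rng = np.random.default_rng(0)
    x = np.linspace(0, 2 * np.pi, 200)
    y_true = np.sin(x) + rng.normal(0.0, 0.2, size=x.shape)
    pred_mean = np.sin(x)              # point predictions
    pred_std = 0.3 * np.ones_like(x)   # claimed uncertainty (here, a constant)

    # One call summarizes accuracy, average calibration, sharpness, and
    # proper scoring rules for the model's predictive distribution.
    metrics = uct.metrics.get_all_metrics(pred_mean, pred_std, y_true)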
Impact
The Uncertainty Toolbox is valuable not only for uncertainty quantification itself but also for researchers across machine learning and the physical sciences. It is one of the most popular open-source repositories on GitHub for uncertainty quantification and calibration, and its capabilities have contributed to many applications. For example, the toolbox has supported research into new calibration algorithms, model-based reinforcement learning, and uncertainty quantification applications in the physical sciences. The research community contributes to maintaining and updating the toolbox, and its impact is expected to continue to grow.
Summary
Prediction uncertainty is an inherent challenge when deploying AI and machine learning algorithms, especially in safety-critical applications such as plasma control. The Uncertainty Toolbox is a practical tool for this problem. Its features include implementations of evaluation metrics (check scores, likelihood scores, interval scores, average calibration, adversarial group calibration, and more), visualizations (prediction intervals, reliability diagrams, and more), and recalibration algorithms. Together, these allow researchers to conduct a thorough investigation of the various aspects of prediction uncertainty, as illustrated in the sketch below.
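The following minimal sketch shows the visualization-and-recalibration side of that workflow on synthetic, deliberately overconfident predictions. The function names (uct.viz.plot_calibration, uct.recalibration.get_std_recalibrator, uct.metrics_calibration.mean_absolute_calibration_error) follow the toolbox's documentation, but should be treated as assumptions to verify against the installed version.

    import numpy as np
    import matplotlib.pyplot as plt
    import uncertainty_toolbox as uct

    # Synthetic predictions that are overconfident: the claimed standard
    # deviation (0.05) is much smaller than the true noise level (0.2).
    rng = np.random.default_rng(1)
    x = np.linspace(0, 2 * np.pi, 200)
    y_true = np.sin(x) + rng.normal(0.0, 0.2, size=x.shape)
    pred_mean = np.sin(x)
    pred_std = 0.05 * np.ones_like(x)

    # Reliability diagram: expected vs. observed coverage of prediction intervals.
    uct.viz.plot_calibration(pred_mean, pred_std, y_true)
    plt.show()

    # Recalibrate by learning a rescaling of the predictive standard deviations,
    # then compare average calibration error before and after.
    recalibrator = uct.recalibration.get_std_recalibrator(pred_mean, pred_std, y_true)
    pred_std_recal = recalibrator(pred_std)
    before = uct.metrics_calibration.mean_absolute_calibration_error(pred_mean, pred_std, y_true)
    after = uct.metrics_calibration.mean_absolute_calibration_error(pred_mean, pred_std_recal, y_true)
    print(f"Average calibration error before: {before:.3f}, after: {after:.3f}")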
The importance of this toolbox extends beyond uncertainty quantification itself, as it contributes to research not only in the physical sciences but also in machine learning more broadly. As one of the leading open-source repositories on GitHub for uncertainty quantification and calibration, it is widely used by the research community and maintained by community contributors. As new algorithms are continually added, its impact in accelerating research into uncertainty quantification and its applications is expected to continue to grow.
Journal link: NeurIPS 2023 Workshop on Adaptive Experiment Design and Active Learning in the Real World (2023)
