Improving the predictability of machine learning models

Machine Learning


Researchers at MIT have developed a method that allows machine learning models to quantify the reliability of their predictions. The technique requires only a modest amount of new data and is far less computationally intensive than competing approaches.


Overview

Machine learning models help people solve complex problems: they can spot disease in medical images or detect obstacles in the road so that self-driving cars can avoid them. But these models can be wrong, so in high-stakes situations users need to know when to trust a model’s predictions.

Uncertainty quantification is one way to increase a model’s trustworthiness. Alongside each prediction, the model produces a score indicating how likely that prediction is to be correct. Unfortunately, while such scores are valuable, most current methods require retraining the entire model to give it this ability. Training a model means showing it millions of examples so it can learn the task, so retraining demands millions of new data points and a large amount of computational power.

Uncertainty learning

Neural networks are known to be overconfident when the output label distribution is used directly as an uncertainty measure. Existing methods address this issue primarily by retraining the entire model with an uncertainty quantification objective, so that the retrained model achieves both the target accuracy and good uncertainty estimates. However, training a model from scratch is time-consuming and often impractical.
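To illustrate that overconfidence with a minimal sketch (a toy example, not the researchers’ method): taking the maximum softmax probability of a network’s logits as a confidence score yields near-certain values even for moderately separated logits, and a common post-hoc fix, temperature scaling, softens those scores.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; temperature > 1 softens them."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# A network emitting large logits looks almost certain ...
logits = np.array([8.0, 2.0, 1.0])
print(softmax(logits).max())                    # ~0.997: overconfident

# ... while dividing logits by a calibration temperature tempers the score.
print(softmax(logits, temperature=4.0).max())   # ~0.72: more honest
```

The temperature value here is arbitrary; in practice it would be fit on a held-out calibration set.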

The researchers instead propose a computationally efficient Bayesian metamodel that endows pretrained models with strong uncertainty quantification capabilities. The method requires no additional training data, is versatile enough to capture different kinds of uncertainty, and adapts readily to multiple application contexts, including out-of-domain data detection, misclassification detection, and trustworthy transfer learning.
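The metamodel’s architecture is not detailed here, but the core idea — attaching a small, cheap learner to a frozen base model so it can predict when that model is trustworthy — can be sketched as follows. Everything in this snippet (the toy linear base model, the margin feature, the logistic metamodel) is an illustrative assumption, not the authors’ implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Frozen base model: a fixed linear classifier we are NOT allowed to retrain.
W_base = np.array([[2.0, -1.0], [-1.0, 2.0]])  # weights for 2 classes

def base_logits(X):
    return np.asarray(X) @ W_base.T

# Synthetic data: two overlapping Gaussian clusters.
X = np.vstack([rng.normal([1.0, 0.0], 0.6, (200, 2)),
               rng.normal([0.0, 1.0], 0.6, (200, 2))])
y = np.repeat([0, 1], 200)

# The metamodel reads only the base model's outputs (here, the gap between
# the top two logits) and learns to predict whether the base prediction
# is correct -- no retraining of the base model, no new labeled data.
margin = np.abs(np.diff(base_logits(X), axis=1)).ravel()
correct = (base_logits(X).argmax(axis=1) == y).astype(float)

# Tiny 1-D logistic-regression metamodel, fit by gradient descent on log loss.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * margin + b)))
    w -= 0.1 * np.mean((p - correct) * margin)
    b -= 0.1 * np.mean(p - correct)

def trust_score(x):
    """Metamodel's estimate that the base model's prediction for x is reliable."""
    m = np.abs(np.diff(base_logits(np.atleast_2d(x)), axis=1)).ravel()
    return 1.0 / (1.0 + np.exp(-(w * m + b)))
```

A point near a cluster center (large logit margin) receives a higher trust score than one on the decision boundary, which is the behavior a misclassification detector needs.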

Quantification of uncertainty

For uncertainty quantification, a machine learning model assigns a numerical score to each output, reflecting its confidence in that prediction. However, building a new model or retraining an existing one to add this capability requires large amounts of data and expensive computation, which is often impractical. Worse, existing methods can have the unintended side effect of reducing prediction accuracy.

Data uncertainty arises from corrupted data or improper labeling and can be reduced only by repairing the dataset or collecting new data. Model uncertainty arises when the model does not know how to explain newly observed data; lacking similar training instances, it is likely to produce inaccurate predictions. This is a difficult but common problem when deploying models, since real-world inputs often differ from the training samples.
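One standard way to separate these two kinds of uncertainty — a common ensemble-based decomposition, not necessarily the one used in this work — is to split an ensemble’s predictive entropy into expected entropy (data uncertainty) and mutual information (model uncertainty):

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a categorical distribution, in nats."""
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(np.clip(p, 1e-12, None)))

def decompose(member_probs):
    """Split an ensemble's predictive uncertainty into the two kinds.

    member_probs: (n_members, n_classes) class probabilities, one row per
    ensemble member, for a single input.
    Returns (data_uncertainty, model_uncertainty).
    """
    member_probs = np.asarray(member_probs, dtype=float)
    total = entropy(member_probs.mean(axis=0))          # predictive entropy
    data = np.mean([entropy(p) for p in member_probs])  # expected entropy
    return data, total - data                           # model = mutual info

# Members agree on a genuinely ambiguous input: the uncertainty is in the data.
print(decompose([[0.5, 0.5], [0.5, 0.5]]))     # (~0.693, 0.0)

# Members confidently disagree: the uncertainty lies in the model.
print(decompose([[0.99, 0.01], [0.01, 0.99]]))  # (~0.056, ~0.637)
```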

Validation

After the model produces an uncertainty score, the user still needs reason to believe that score is accurate. Researchers typically validate a model by holding out a subset of the original training data and evaluating the model on it. Here, the researchers developed a new validation technique: they add noise to the held-out data, producing corrupted samples that resemble the out-of-distribution inputs which cause model uncertainty, and use this noisy dataset to assess the quality of the uncertainty estimates.
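A minimal sketch of that validation idea, assuming a toy classifier and Gaussian corruption (the actual noise model and benchmarks are not specified here): corrupt a copy of the held-out set and check that the model’s confidence drops on the corrupted version.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy pretrained classifier on 2-D inputs (weights assumed fixed).
W = np.array([[3.0, 0.0], [0.0, 3.0]])
confidence = lambda X: softmax(X @ W.T).max(axis=1)

# Clean held-out set: points resembling the training distribution.
clean = np.vstack([rng.normal([1.0, 0.0], 0.2, (100, 2)),
                   rng.normal([0.0, 1.0], 0.2, (100, 2))])

# Corrupted copy: Gaussian noise pushes points off the training manifold,
# standing in for the out-of-distribution inputs the article describes.
noisy = clean + rng.normal(0.0, 1.0, clean.shape)

# A well-behaved uncertainty score should drop on the corrupted set.
print(confidence(clean).mean(), confidence(noisy).mean())
```

If average confidence stayed flat on the noisy copy, that would be evidence the uncertainty scores cannot be trusted.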

Conclusion

The researchers show that the proposed metamodel approach is flexible and outperforms representative baselines on standard image classification benchmarks across these applications. The technique should also help researchers apply uncertainty quantification to more machine learning models, ultimately helping users decide when to trust a model’s predictions.




