Explainability, Interpretability, and Observability in Machine Learning | By Jason Zhong | June 2024


There is no single standard definition of explainability. Adadi and Berrada (2018) describe it broadly as the "movement, initiatives, and efforts made in response to AI transparency and trust concerns." Bibal et al. (2021), aiming to establish guidelines for legal requirements, state that an explainable model should provide (i) the main features used in making a decision, (ii) all of the processed features, (iii) a comprehensive explanation of the decision, and (iv) an understandable representation of the whole model. They define explainability as providing "meaningful insight into how a particular decision is made," which requires "a train of thought that can make the decision meaningful to the user." Explainability, then, means understanding the internal logic and mechanisms of a model that underpin its decisions.
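Requirement (i), surfacing the main features used in a decision, can be illustrated with a minimal sketch. The model below is a hypothetical hand-rolled linear scorer (the feature names and weights are invented for illustration); because each feature's contribution is just weight times value, ranking contributions by magnitude directly yields the "main characteristics used in making the decision":

```python
# Hypothetical linear scoring model for a loan-style decision.
# Feature names and weights are illustrative, not from any real system.
FEATURES = ["income", "debt_ratio", "age", "num_accounts"]
WEIGHTS = {"income": 0.6, "debt_ratio": -0.8, "age": 0.1, "num_accounts": 0.05}

def score(applicant):
    """Overall decision score; positive suggests approval."""
    return sum(WEIGHTS[f] * applicant[f] for f in FEATURES)

def explain(applicant):
    """Per-feature contributions to the score, largest magnitude first,
    i.e. the 'main features used in making the decision'."""
    contribs = {f: WEIGHTS[f] * applicant[f] for f in FEATURES}
    return sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True)

applicant = {"income": 1.2, "debt_ratio": 0.9, "age": 0.4, "num_accounts": 2.0}
print(f"score: {score(applicant):+.2f}")
for name, c in explain(applicant):
    print(f"{name:>12}: {c:+.2f}")  # signed contribution of each feature
```

For a linear model this decomposition is exact; for the opaque models discussed below, no such direct reading of the internal logic exists, which is precisely the problem explainability research addresses.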

A historic example of the need for explainability is the Go match between the algorithm AlphaGo and Lee Sedol, considered one of the best Go players of all time. In the second game, AlphaGo's 37th move was described by experts and the developers alike as "so surprising" that it "overturned hundreds of years of received wisdom" (Koppey, 2018). The move, widely called "inhuman," proved decisive and ultimately allowed the algorithm to win the game. While humans were later able to infer the motivation behind the move, they had no insight into the model's internal logic and therefore could not explain why the model chose that move over the alternatives. This demonstrates that machine learning can perform calculations far beyond human capability, but it also raises a question: is that enough to trust its decisions blindly?

While accuracy is a key factor in the adoption of machine learning, explainability is often prioritized even above accuracy.

Doctors, for instance, are unwilling to accept a model that recommends not removing a cancerous tumor, even if that would be the better long-term outcome for the patient, when the model cannot surface the internal logic behind the decision. This is understandable, and it is one of the main factors limiting the adoption of machine learning in many fields despite its great potential.
