
You may have heard the terms “explainability” or “interpretability” mentioned alongside artificial intelligence (AI). I’d like to unpack this concept, explain why it matters to your organization, and offer some high-level strategies for increasing explainability when developing and deploying AI solutions.
Explainability, explained
Explainability is just what it sounds like: any stakeholder in your organization can understand why a machine learning (ML) model arrived at a particular output given an input, or how the model reached a decision. In short, it’s important to know what’s going on inside your model.
Lack of explainability is often referred to as the “black box problem”: ML models produce outputs from given inputs, but the process connecting the two is opaque. This can happen for several reasons, including:
- Users lack technical knowledge
- Low-quality datasets were used
- The model architecture doesn’t fit the dataset or task
- The model wasn’t developed or trained properly
When working with deep learning networks, the black box problem is also inherent, to some extent, in how the network self-tunes a vast number of parameters to produce outputs that match the training dataset. You can’t know the ideal values of all those parameters in advance; that’s the magic of neural network training. Even so, when deploying a model, it’s important to have a high-level understanding of how it works.
The challenge of the black box problem
The black box problem boils down to the difference between correlation and causation. Even if a model finds arbitrary correlations between inputs and outputs that happen to produce “useful” results, we still want to know why certain inputs lead to certain outputs (i.e., causation). This matters especially for business applications of ML, since an organization needs to know why a particular prediction or classification was made in order to act on it.
Lack of explainability causes many problems for companies. Below are just a few.
- Customers lose confidence in your system. We’ve all received recommendations for YouTube videos and Netflix shows that the underlying AI algorithms thought we would like, only to wonder, “Why would I want that?” If it happens too often, customers can lose confidence in your product.
- Employees lose trust in the system or stop using it altogether. Many sales teams at established companies face this problem. I was recently speaking with a sales manager at Morton Salt, a company founded in the mid-19th century, who said employees pushed back against incorporating AI into their workflows because they didn’t understand it. Without that understanding, executives and employees would rather rely on intuition built through experience. If your model consistently produces unstable, unexplainable results, or if no one knows how it actually works, you’re going to face serious problems with internal deployments.
- Auditability and compliance. Imagine a bank using AI to determine the size of loans to offer customers. If the model relied on inappropriate factors, such as someone’s ethnicity, when determining loan amounts, it would violate many anti-discrimination laws. While the lending scenario is largely a solved problem, many newer applications of AI face similar issues. You need to understand how a model arrived at its decision in order to stay compliant; most current regulations are designed around addressing bias and unfairness in model results.
- Debugging and guiding interventions. Step into the engine room of a large ship and you’ll see banks of instruments showing what is going on inside the intricate machinery around you. Marine engineers use these gauges to monitor performance and make repairs as necessary. The same should be true when developing machine learning models.
- Difficulty in making business decisions. Without explainability, it becomes difficult to assess whether a model and its implementation meet business needs and what actions to take based on the output.
Explainability is key
It may sound like wishful thinking, but explainability can be built directly into AI systems. I recently attended the HIMSS 2023 health tech conference, where one of the speakers presented a computer vision model that predicts whether spots on the skin are malignant or benign. Explainability is critical in this application: not only are lives at stake, but doctors are directly involved, and black boxes are poorly suited to human intervention.
To solve this problem, the speaker’s team developed a method for creating a relevance map that shows how heavily the ML model weighted each pixel when making its decision. Some pixels appear bright pink, meaning the model weighted them heavily in the final prediction. It’s a simple technique, but it meaningfully enhances explainability: doctors can use the relevance map to spot when the model has made a mistake, perhaps by factoring hair or a tattoo into its malignancy assessment.
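For readers who want a concrete picture, here is a minimal sketch of one common way to build such a relevance map: gradient-based saliency. The speaker did not share their exact method, so treat this as illustrative rather than their implementation; `model` and `image` are hypothetical stand-ins for a trained PyTorch classifier and a preprocessed input tensor.

```python
import torch

def relevance_map(model, image):
    """Per-pixel relevance: how strongly each pixel influenced the top score."""
    model.eval()
    image = image.clone().requires_grad_(True)  # track gradients w.r.t. pixels
    scores = model(image.unsqueeze(0))          # add a batch dimension
    scores.max().backward()                     # backprop the top class score
    # Relevance = gradient magnitude, collapsed across the color channels
    return image.grad.abs().max(dim=0).values   # shape: (height, width)
```

Overlaying the returned map on the original image highlights the regions the model weighted most heavily, which is exactly what lets a doctor sanity-check the prediction.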
As such, explainability is the key to unlocking strong synergies between humans and AI, which (for now) remains the best strategy for companies adopting this technology.
Enterprise explainability
Given how important explainability is to companies adopting AI, here are some ways to increase it.
- Establish an AI governance team, an ethics committee, and an AI framework. These measures keep organizational values aligned throughout the AI development and deployment process.
- Choose your model wisely. Simply put, some models are easier to interpret than others. Decision trees, logistic regression, and linear regression are among the simplest types of ML models and are very easy to understand (see the coefficient sketch after this list). You don’t necessarily need a mega-model with over 100 billion parameters to deliver value to your organization.
- Feature selection. Carefully identify which features of the input dataset should be considered when making predictions or classifications, and check the relevant regulations on which features may be used.
- Use visualization tools (for example, the relevance map from the healthcare case above).
- Benchmark models for bias and fairness (a minimal fairness-metric sketch follows this list).
- Use synthetic or alternate datasets
- Cross-functional education. Ideally, everyone in your company who uses or deploys AI systems should be familiar with the basics. This drives adoption and empowers everyone to use AI to make better decisions. Despite all the buzz around AI replacing jobs, the most pressing concern for organizations is reskilling existing teams.
- Partner with the Acceleration Economy AI & Hyperautomation Top 10 companies. These companies have seen it all, and they can support you every step of the way, offering everything from do-it-yourself platforms to excellent professional services.
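To make the “choose your model wisely” point concrete, here is a minimal sketch of why simple models are easy to explain: a logistic regression exposes one learned coefficient per feature, which any stakeholder can read directly. The dataset and feature names below are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((200, 3))                             # placeholder training data
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)      # synthetic target
feature_names = ["income", "tenure", "utilization"]  # hypothetical features

model = LogisticRegression().fit(X, y)
for name, coef in zip(feature_names, model.coef_[0]):
    # The sign and magnitude show how each feature pushes the prediction
    print(f"{name}: {coef:+.2f}")
```

Compare that with a deep network, where no single parameter maps to a human-readable explanation.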
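Similarly, for benchmarking models for bias and fairness, here is a minimal sketch of one widely used check, the demographic parity difference: the gap in positive-prediction rates between two groups. The choice of metric and the toy arrays are my own illustration, not a prescribed standard.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between two groups (0 means parity)."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # model decisions, e.g. loan approvals
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # protected attribute (two groups)
print(demographic_parity_difference(y_pred, group))  # 0.5: a gap worth auditing
```

A large gap doesn’t prove discrimination on its own, but it flags where an audit, like the lending scenario above, should focus.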
Final thoughts
The black box problem is not one you want to face, but in most cases it is not difficult to solve. A lack of AI knowledge and savvy among internal stakeholders is typically the cause of slow internal adoption, poor business decisions, and biased or dysfunctional models.
Explainability goes hand in hand with the democratization of AI. Just a decade ago, the development and use of AI was confined to data scientists; today there is an explosion of low-code and turnkey AI solutions. But these products alone are not enough: every organization now has a responsibility to upskill its teams on the effective use of AI.
Looking for real-world insights on artificial intelligence and hyperautomation? Subscribe to the AI and Hyperautomation Channel.

