Explainable AI using expressive Boolean expressions



The explosion of artificial intelligence (AI) and machine learning applications is permeating nearly every industry and aspect of daily life.

But that growth doesn't come without irony. AI exists to simplify and/or accelerate decision-making and workflows, but the methodologies for doing so are often very complex. In fact, some "black box" machine learning algorithms are so complex and multifaceted that even the computer scientists who created them find simple explanations impossible.

This becomes very problematic in certain use cases, such as those in the financial and healthcare sectors, that are governed by industry best practices and government regulations requiring a transparent explanation of an AI solution's inner workings. If these applications cannot meet such explainability requirements, they may become useless regardless of their overall effectiveness.

To address this challenge, the Fidelity Center for Applied Technology (FCAT) team collaborated with the Amazon Quantum Solutions Lab to propose and implement an interpretable machine learning model for Explainable AI (XAI) based on expressive Boolean expressions. Such an approach can contain arbitrary operators applied to one or more Boolean variables, making it more expressive than rigid rule-based or tree-based approaches.

For comprehensive details on this project, you can read the full story here.

Our hypothesis was that because models such as decision trees can grow deep and become difficult to interpret, the need to find expressive rules with low complexity yet high accuracy leads to a difficult optimization problem that must be solved. Additionally, simplifying the model through this advanced XAI approach provides further benefits, such as revealing important biases relevant to the ethical and responsible use of ML. It also makes the model easier to maintain and improve.

We proposed an approach based on expressive Boolean expressions, which define rules with tunable complexity (or interpretability) for classifying input data. Such expressions can contain arbitrary operators applied to one or more Boolean variables (such as And and AtLeast), giving them higher expressive power than more rigid rule-based or tree-based methodologies.
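To make this concrete, here is a minimal sketch (not the authors' implementation) of how a rule built from And, Or, and AtLeast operators over Boolean variables might be evaluated. All names and the example rule are hypothetical, chosen purely for illustration.

```python
# Illustrative sketch: composing an expressive Boolean rule from
# And, Or, and AtLeast operators over named Boolean features.
# Not the authors' implementation; all names here are hypothetical.

def And(*subrules):
    return lambda record: all(rule(record) for rule in subrules)

def Or(*subrules):
    return lambda record: any(rule(record) for rule in subrules)

def AtLeast(k, *subrules):
    # True when at least k of the sub-rules hold for the record.
    return lambda record: sum(rule(record) for rule in subrules) >= k

def var(name):
    # Look up a Boolean feature by name in the input record.
    return lambda record: bool(record[name])

# Hypothetical rule: positive classification when "stable_income" holds
# AND at least 2 of 3 supporting signals are present.
rule = And(
    var("stable_income"),
    AtLeast(2, var("good_history"), var("low_debt"), var("has_collateral")),
)

applicant = {"stable_income": True, "good_history": True,
             "low_debt": False, "has_collateral": True}
print(rule(applicant))  # -> True (2 of the 3 supporting signals hold)
```

The depth of nesting and the number of literals give a natural complexity measure for such a rule, which is what the optimization described below trades off against accuracy.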

There are two competing goals in this problem: minimizing the complexity of the rules while maximizing their performance. Rather than taking the common approach of combining the two objectives into one or constraining one of them, we chose to include both in our formulation. In doing so, we used balanced accuracy as the overarching performance metric, without loss of generality.
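Balanced accuracy is the mean of the per-class recalls (sensitivity and specificity), which keeps the metric meaningful on imbalanced datasets such as credit or medical data. A minimal sketch of the computation:

```python
def balanced_accuracy(y_true, y_pred):
    # Mean of sensitivity (recall on positives) and
    # specificity (recall on negatives).
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    pos = sum(1 for t in y_true if t == 1)
    neg = len(y_true) - pos
    return 0.5 * (tp / pos + tn / neg)

# Imbalanced example: 8 negatives, 2 positives; the classifier
# finds all negatives but only one of the two positives.
y_true = [0] * 8 + [1] * 2
y_pred = [0] * 8 + [1, 0]
print(balanced_accuracy(y_true, y_pred))  # -> 0.75
```

Note that plain accuracy on this example would be 0.9, masking the missed positive; balanced accuracy's 0.75 reflects it, which is why it suits the datasets benchmarked here.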

We were also motivated by the idea of incorporating operators like AtLeast to address the need for highly interpretable checklists, such as a list of medical symptoms that indicate a certain condition. With such a checklist, it is conceivable that a minimum number of symptoms must be present for a positive diagnosis. Similarly, in finance, a bank may decide whether to offer credit to a customer based on the presence of a certain number of items from a larger list.

We successfully implemented the XAI model and benchmarked it on several public datasets covering credit, customer behavior, and medical conditions. We found that our model was generally competitive with other well-known alternatives. We also found that the XAI model could potentially leverage dedicated hardware or quantum devices to quickly solve integer linear programming (ILP) or quadratic unconstrained binary optimization (QUBO) subproblems. Adding a QUBO solver reduces the number of iterations by quickly proposing non-local moves.
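For readers unfamiliar with QUBO, the subproblem has the form "minimize x^T Q x over binary vectors x". The sketch below brute-forces a toy instance to show the shape of the problem; it is not the paper's actual formulation, and the matrix values are purely illustrative. Dedicated hardware and quantum devices target exactly this form at scales where enumeration is infeasible.

```python
import itertools

def solve_qubo_bruteforce(Q):
    # Minimize x^T Q x over binary vectors x.
    # Enumeration is only feasible for small n; special-purpose or
    # quantum solvers address the same form at much larger scales.
    n = len(Q)
    best_x, best_e = None, float("inf")
    for bits in itertools.product((0, 1), repeat=n):
        e = sum(Q[i][j] * bits[i] * bits[j]
                for i in range(n) for j in range(n))
        if e < best_e:
            best_x, best_e = bits, e
    return best_x, best_e

# Toy QUBO (illustrative values): negative diagonal terms reward
# selecting a variable; positive off-diagonal terms penalize
# selecting adjacent pairs together.
Q = [[-1,  2,  0],
     [ 0, -1,  2],
     [ 0,  0, -1]]
x, energy = solve_qubo_bruteforce(Q)
print(x, energy)  # -> (1, 0, 1) -2
```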

As mentioned earlier, explainable AI models using Boolean expressions have many possible applications in healthcare and in Fidelity's financial business (such as credit scoring, or explaining why some customers chose a product while others did not). By creating these interpretable rules, one can gain a deeper level of insight that leads to future improvements in product development and refinement, as well as in optimizing marketing campaigns.

Based on our findings, we determined that Explainable AI using expressive Boolean expressions is appropriate and desirable for use cases that demand greater explainability. Additionally, as quantum computing continues to develop, we anticipate potential speed improvements from quantum devices and other special-purpose hardware accelerators.

Future work may focus on applying these classifiers to other datasets, introducing new operators, or applying these concepts to other use cases.


