Why You Need to Look Inside the AI Black Box

The following essay is reprinted with permission from The Conversation, an online publication covering the latest research.

For some people, the term “black box” brings to mind the recording devices on airplanes that are valuable for postmortem analyses when the unthinkable happens. For others it evokes small, minimally equipped theaters. But black box is also an important term in the world of artificial intelligence.

An AI black box refers to an AI system whose inner workings are invisible to the user. You can feed it input and get output, but you cannot examine the system’s code or the logic that produced that output.

Machine learning is a major subset of artificial intelligence. It is the basis for generative AI systems such as ChatGPT and DALL-E 2. Machine learning has three components: an algorithm or a set of algorithms, training data, and a model. An algorithm is a set of procedures. In machine learning, an algorithm learns to identify patterns after being trained on a large set of examples (the training data). Once a machine learning algorithm has been trained, the result is a machine learning model. The model is what people actually use.
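
To make those three components concrete, here is a minimal sketch in Python. It uses scikit-learn, a library chosen for illustration rather than one named in the article, and a tiny made-up dataset.

```python
# A minimal sketch of the three machine learning components described above,
# using scikit-learn (an illustrative choice; the article names no library).
from sklearn.linear_model import LogisticRegression

# 1. Training data: example inputs paired with the labels we want to learn.
X_train = [[0.1, 0.9], [0.8, 0.2], [0.2, 0.8], [0.9, 0.1]]
y_train = [1, 0, 1, 0]

# 2. Algorithm: a set of procedures for finding patterns in the training data.
algorithm = LogisticRegression()

# 3. Model: the result of running the algorithm on the training data.
model = algorithm.fit(X_train, y_train)

# The model is what people actually use: give it new input, get a prediction.
print(model.predict([[0.15, 0.85]]))
```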

For example, a machine learning algorithm could be designed to identify patterns in images, and images of dogs could serve as the training data. The resulting machine learning model would be a dog spotter: give it an image as input, and it returns as output whether, and where, a set of pixels in the image represents a dog.
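
The sketch below shows what using such a dog spotter might look like from the outside. It uses a pretrained object detector from torchvision as a stand-in; the library, the file name photo.jpg, and the label index are assumptions made for this example, not details from the article.

```python
# A hedged sketch of the "dog spotter" interface described above, using a
# pretrained torchvision detector (torchvision >= 0.13) as a stand-in.
# The user supplies an image and reads the output; nothing more is visible.
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

image = convert_image_dtype(read_image("photo.jpg"), torch.float)  # the input
with torch.no_grad():
    prediction = model([image])[0]

# The output: where in the image the model thinks a dog is, with a confidence.
DOG_LABEL = 18  # "dog" in the COCO label map commonly used with this model
for box, label, score in zip(prediction["boxes"], prediction["labels"],
                             prediction["scores"]):
    if label.item() == DOG_LABEL and score.item() > 0.5:
        print(f"dog at {box.tolist()} (confidence {score.item():.2f})")
```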

Any of the three components of a machine learning system can be hidden, or put in a black box. As is often the case, the algorithms are publicly known, which makes putting them in a black box less effective. So to protect their intellectual property, AI developers often put the model in a black box. Another approach software developers take is to obscure the data used to train the model; in other words, to put the training data in a black box.
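
As a rough illustration of how a model ends up black-boxed in practice, the sketch below keeps a (stubbed) model on the developer’s server and exposes only an input/output web API. The endpoint and function names are invented for this example; the article describes no specific service.

```python
# A minimal sketch of a model served as a black box: callers send inputs and
# receive outputs, but the model itself never leaves the developer's server.
# The endpoint name and the stub model below are illustrative assumptions.
from flask import Flask, jsonify, request

app = Flask(__name__)

def proprietary_model(pixels):
    """Stub standing in for the trained model the developer keeps hidden."""
    return {"dog_found": True, "box": [10, 20, 110, 160]}

@app.route("/spot-dog", methods=["POST"])
def spot_dog():
    pixels = request.get_json()["pixels"]       # callers supply only the input...
    return jsonify(proprietary_model(pixels))   # ...and see only the output

if __name__ == "__main__":
    app.run()
```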

The opposite of a black box is sometimes called a glass box. An AI glass box is a system whose algorithms, training data, and model are all visible to anyone. But researchers sometimes characterize aspects of even these systems as black boxes.

That’s because researchers do not fully understand how machine learning algorithms, particularly deep learning algorithms, work. The field of explainable AI is working to develop algorithms that, while not necessarily glass boxes, can be better understood by humans.
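
One way to get a feel for what explainability tools offer, without opening the model itself, is to measure which inputs its decisions depend on most. The sketch below is an illustrative example, not a method from the article, using scikit-learn’s permutation importance on a built-in toy dataset.

```python
# A hedged illustration of the explainable-AI idea above: even treating the
# model as opaque, we can ask which input features mattered most to it.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Shuffle each input feature in turn and see how much the model's accuracy
# drops: a large drop means the model relied heavily on that feature.
result = permutation_importance(model, data.data, data.target,
                                n_repeats=5, random_state=0)
top = sorted(zip(data.feature_names, result.importances_mean),
             key=lambda pair: -pair[1])[:5]
for name, importance in top:
    print(f"{name}: {importance:.3f}")
```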

Why AI Black Boxes Matter

There are often good reasons to be wary of black box machine learning algorithms and models. Suppose a machine learning model has made a diagnosis about your health. Would you want the model to be a black box or a glass box? What about the physician prescribing your course of treatment? Perhaps she would want to know how the model arrived at its decision.

What if the machine learning model that determines whether you are eligible for a business loan from a bank turns you down? Wouldn’t you like to know why? If you did, you could appeal the decision more effectively, or change your situation to improve your chances of getting a loan next time.

Black boxes also have important implications for software security. For years, many people in the computing field believed that keeping software in a black box would prevent hackers from examining it and therefore make it secure. This assumption has largely been proved wrong, because hackers can reverse engineer software: they observe how it behaves, build a duplicate that behaves the same way, and discover vulnerabilities they can exploit.
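
Applied to machine learning, that kind of reverse engineering can look like the hedged sketch below: query a black-box system many times, record its behavior, and fit a substitute model that duplicates it. The “black box” here is simulated for the sake of a self-contained example.

```python
# A hedged sketch of reverse engineering by observation: record a black box's
# input/output behavior and train a substitute model that mimics it.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def black_box(x):
    """Stand-in for a system whose internals we cannot see."""
    return int(x[0] + x[1] > 1.0)

# Observe the black box's behavior on many inputs...
queries = np.random.rand(1000, 2)
answers = np.array([black_box(q) for q in queries])

# ...and build a duplicate that reproduces that behavior.
substitute = DecisionTreeClassifier().fit(queries, answers)
print(substitute.score(queries, answers))  # how closely the copy matches
```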

When software is in a glass box, software testers and well-meaning hackers can examine it and alert its creators to weaknesses, minimizing cyberattacks.

This article originally appeared in The Conversation. Read the original article.
