
By now, we've all heard and read a lot about Artificial Intelligence (AI). We've probably used some of the myriad AI tools that are now available. For some, AI feels like a magic wand that can predict the future.
But AI isn't perfect: a New Zealand supermarket's meal planner offered customers toxic recipes, a New York City chatbot advised people to break the law, and Google's AI Overview encouraged people to eat stones.
AI tools are not general-purpose: each system is built to address a specific problem. With any AI system, expectations must be aligned with its capabilities, and much of that comes down to how the AI is built.
Let’s look at some of the inherent shortcomings of AI systems.
Troubles in the real world
One problem common to all AI systems is that they are never 100% accurate in real-world settings. Predictive AI systems, for example, are trained on data points from the past.
If the AI encounters something new that is not similar to its training data, it will most likely not be able to make the right decision.
As a hypothetical example, take a military aircraft equipped with an AI-powered autopilot system. This system works thanks to a training “knowledge base.” But AI is not a magic wand; it is just mathematical calculation. If an enemy creates an obstacle the aircraft's AI cannot “see” because nothing like it appears in the training data, the consequences could be devastating.
Unfortunately, there isn't much we can do about this problem other than training the AI for every situation we know about, which can sometimes be an insurmountable challenge.
Training data bias
You may have heard of AI making biased decisions. Bias typically occurs when you have imbalanced data. Simply put, this means that when training an AI system, you show it too many examples of one type of outcome and too few examples of another type of outcome.
Take the example of an AI system trained to predict the likelihood that a particular individual will commit a crime: if the crime data used to train the system contains mostly people from group A (say, a particular ethnicity) and very few people from group B, the system will not learn about both groups equally well.
As a result, the predictions for group A will make it seem as though these people are more likely to commit crimes than people in group B. If the system is used uncritically, this bias can have serious ethical consequences.
Thankfully, developers can address this issue by “balancing” their datasets, which involves a variety of approaches including the use of synthetic data – computer-generated, labeled data built for testing and training AI, with built-in checks to prevent bias.
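To make “balancing” concrete, here is a minimal Python sketch (using entirely made-up data and labels) of one of the simplest approaches: randomly oversampling the under-represented group until the groups are the same size. Real projects typically use more careful techniques, including the synthetic data mentioned above.

```python
# Minimal illustration of dataset balancing by random oversampling.
# The dataset below is invented purely for demonstration.
import random
from collections import Counter

random.seed(0)

# Toy labelled dataset: 90 examples from group "A", 10 from group "B"
dataset = [("features_A", "A")] * 90 + [("features_B", "B")] * 10

counts = Counter(label for _, label in dataset)
print("Before balancing:", counts)          # Counter({'A': 90, 'B': 10})

# Duplicate examples from smaller groups until every group matches the largest
target = max(counts.values())
balanced = list(dataset)
for label, count in counts.items():
    examples = [row for row in dataset if row[1] == label]
    balanced += random.choices(examples, k=target - count)

print("After balancing:", Counter(label for _, label in balanced))
# Counter({'A': 90, 'B': 90})
```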
Being outdated
Another problem with AI can arise when it is trained “offline” and does not keep up with the trends of the problem it is meant to tackle.
A simple example would be an AI system developed to predict the daily temperature in a city, trained on all the historical temperature data for that location.
Suppose after the AI has been trained and deployed, a severe weather event disrupts normal weather patterns. The AI system making the predictions was trained on data that didn't include this disruption, and so the predictions become less and less accurate.
The solution to this problem is to train the AI “online,” meaning it is periodically presented with the latest temperature data while it is being used to make predictions.
Although this seems like a great solution, online training comes with risks: if the AI system is left to retrain itself on whatever new data arrives, its behavior can drift out of control.
Essentially, this happens because of chaos theory. Put simply, most AI systems are sensitive to initial conditions. If we don't know what data the system will encounter, we don't know how to adjust the initial conditions to control potential instabilities in the future.
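To illustrate the difference between offline and online training, here is a deliberately simple Python sketch: a toy “forecaster” that predicts tomorrow's temperature as the average of the most recent days it has seen, and is updated as new observations arrive. The temperatures are invented, a real AI system would be far more complex, and, as noted above, letting a model retrain itself unsupervised carries its own risks.

```python
# Toy example of "online" updating: the model keeps folding in the latest data.
from collections import deque

class RollingMeanForecaster:
    def __init__(self, window=3):
        self.history = deque(maxlen=window)  # only the most recent observations are kept

    def update(self, observed_temp):
        """Online step: incorporate the latest observation."""
        self.history.append(observed_temp)

    def predict(self):
        return sum(self.history) / len(self.history)

model = RollingMeanForecaster(window=3)

for temp in [18.0, 19.5, 21.0]:        # historical ("offline") data
    model.update(temp)
print(round(model.predict(), 1))       # 19.5

# A disruption shifts the weather. If we keep feeding the model new data
# (online training), its predictions adapt; if not, they drift further off.
for temp in [26.0, 27.5, 28.0]:
    model.update(temp)
print(round(model.predict(), 1))       # 27.2
```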
If the data is incorrect
In some cases, training data may not be fit for purpose – for example, it may not have the characteristics required for the task the AI system is meant to perform.
To take a very simplistic example, imagine an AI tool that identifies “tall” and “short” people. In your training data, should a person who is 170cm tall be labelled “tall” or “short”? If you label them “tall,” what should the system return when it encounters someone who is 169.5cm tall? (Perhaps the best solution is to add a “medium” label.)
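The boundary problem is easy to see in code. The sketch below shows a labelling function of the kind described above; the 170cm and 160cm cut-offs are arbitrary choices made for illustration, and picking them sensibly is exactly where subject matter expertise is needed.

```python
# Hypothetical labelling function with hard thresholds (the values are arbitrary).
def label_height(height_cm: float) -> str:
    if height_cm >= 170:
        return "tall"
    if height_cm >= 160:
        return "medium"   # buffer label that softens the hard tall/short boundary
    return "short"

print(label_height(170.0))   # tall
print(label_height(169.5))   # medium -- with only two labels this would flip to "short"
print(label_height(155.0))   # short
```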
The above may seem trivial, but if the AI system is involved in medical diagnosis, for example, issues with data labelling or poor quality datasets could be problematic.
This problem is not easy to solve. Identifying the relevant information requires a great deal of knowledge and experience. Involving subject matter experts in the data collection process is a good solution, as they can instruct developers on what types of data should be included.
As (future) users of AI and technology, it is important for all of us to be aware of these issues and broaden our perspective on AI and its predicted outcomes for various aspects of our lives.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
