The AI hype cycle is getting in the way of companies



You might think that news of “major AI breakthroughs” would do nothing but help machine learning’s (ML) adoption. If so, think again. Even before the latest splashes (most notably OpenAI’s ChatGPT and other generative AI tools), the rich narrative of an emerging, omnipotent AI was already a major problem for applied ML. That’s because for most ML projects, the buzzword “AI” goes too far: it sets expectations too high and distracts from the precise ways ML can improve business operations.

Most practical use cases for ML, designed to improve the efficiency of existing business operations, innovate in fairly straightforward ways. Don’t let the glare emanating from this flashy technology obscure the simplicity of its fundamental duty: the purpose of ML is to issue actionable predictions, which is why it is sometimes also called predictive analytics. This delivers real value, so long as you eschew the false hype that it is “accurate” like a digital crystal ball.

This capability translates into tangible value in an uncomplicated manner: predictions drive millions of operational decisions. For example, by predicting which customers are most likely to cancel, a company can offer those customers incentives to stay. And by predicting which credit card transactions are fraudulent, a card processor can block them. The most impactful ML use cases improve existing business operations, and the advanced data science employed in such projects ultimately boils down to ML and ML alone.
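To ground the churn example, here is a minimal sketch of what such a prediction pipeline might look like; it is an illustration on my part, not code from any project the article describes. It assumes a Python workflow with scikit-learn, uses synthetic stand-in data, and picks an arbitrary cutoff of the 100 highest-risk customers:

```python
# Minimal sketch: predict which customers are most likely to cancel,
# so the business can target them with retention incentives.
# The data is synthetic; a real project would use historical customer records.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Stand-in for historical data: each row is a customer, the label is
# whether they cancelled (1) or stayed (0).
X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = GradientBoostingClassifier().fit(X_train, y_train)

# The deliverable is not "intelligence" but actionable predictions:
# estimated cancellation probabilities for customers the model hasn't seen.
churn_prob = model.predict_proba(X_test)[:, 1]
top_risk = np.argsort(churn_prob)[::-1][:100]  # 100 highest-risk customers
print("Highest-risk customers:", top_risk[:5])
print("Their churn probabilities:", churn_prob[top_risk[:5]].round(3))
```

Note that no “AI” is needed to describe any of this: the system outputs probabilities, and the business decides what to do with them.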

Here’s the problem: most people conceive of ML as “AI.” This is a reasonable misconception, but “AI” suffers from an unrelenting, incurable case of vagueness. It is a catch-all term of art that does not consistently refer to any particular method or value proposition. Calling ML tools “AI” oversells what most ML business deployments actually do. In fact, you could hardly overpromise more than you do when you call something “AI”: the moniker invokes the notion of artificial general intelligence (AGI), software capable of any intellectual task humans can perform.

This exacerbates a significant, frequent failing of ML projects: they often lack a keen focus on their value, namely exactly how ML will render business processes more efficient. As a result, most ML projects fail to deliver value. In contrast, ML projects that keep a concrete operational objective front and center stand a good chance of achieving it.

What does AI really mean?

“‘AI-powered’ is tech’s meaningless equivalent of ‘all natural.’”

– Devin Coldewey, TechCrunch

AI cannot escape AGI for two reasons. First, the term “AI” is generally thrown around without clarifying whether we are talking about AGI or narrow AI, a term that essentially means practical, focused ML deployments. Despite the great differences between the two, the line between them blurs in common rhetoric and in software sales materials.

Second, there is no satisfactory way to define AI besides AGI. Defining “AI” as something other than AGI has become a research challenge unto itself, albeit a quixotic one. If it doesn’t mean AGI, it doesn’t mean anything; other suggested definitions either fail to qualify as “intelligent” in the ambitious spirit implied by “AI,” or fail to establish an objective goal. We face this conundrum whether we try to pinpoint 1) a definition of “AI,” 2) the criteria by which a computer would qualify as “intelligent,” or 3) a performance benchmark that would certify true AI. These three are one and the same.

The problem is the word “intelligence” itself. When used to describe a machine, it is relentlessly nebulous. That is bad news if AI is meant to be a legitimate field. Engineering cannot pursue an imprecise goal; if you can’t define it, you can’t build it. To develop an apparatus, you must be able to measure how good it is (how well it performs and how close you are to the goal) so that you know you are making progress and so that you ultimately know when you have succeeded in developing it.

In its futile attempts to dodge this conundrum, the industry continually performs an awkward dance of AI definitions that I call the AI shuffle. AI means computers that do something smart (a circular definition). No wait, it’s intelligence demonstrated by machines (even more circular, if that’s possible). Rather, it’s a system that employs certain advanced methodologies, such as ML, natural language processing, rule-based systems, speech recognition, computer vision, or other techniques that operate probabilistically (clearly, employing one or more of these methods doesn’t automatically qualify a system as intelligent).

But surely a machine would qualify as intelligent if it seemed sufficiently humanlike, if you couldn’t distinguish it from a human, say, by interrogating it in a chat room: the famous Turing test. But the ability to fool people is an arbitrary, moving target, since human subjects become wiser to the trickery over time. Any given system will pass the test at most once; fool us twice, shame on humankind. Another reason passing the Turing test misses the mark is the limited value or utility of doing so. If AI could exist, surely it would be useful.

What if we defined AI by what it is capable of? I’ve found that this definition doesn’t work either, because once a computer can do something, we tend to trivialize it. After all, computers can only manage mechanical tasks that are well understood and well specified. Once surmounted, the task suddenly loses its charm, and the computer that can perform it doesn’t seem “intelligent” after all, at least not in the lofty sense intended by “AI.” Once computers mastered chess, there was little feeling that AI had been “solved.”

This paradox, known as the AI effect, tells us that if it’s possible, it’s not intelligent. Forever plagued by a receding goal, AI inadvertently comes to mean “getting computers to do things too difficult for computers to do,” that is, achieving the impossible. No destination satisfies upon arrival. AI defies definition, period. Fittingly, computer science pioneer Larry Tesler famously suggested that we might as well define AI as “whatever machines haven’t done yet.”

Ironically, it was ML’s visible successes that hyped up AI in the first place. After all, improving on measurable performance is supervised machine learning in a nutshell: the system improves by way of feedback that comes from evaluating it against a benchmark, such as a sample of labeled data. In doing so, ML delivers unprecedented value in countless ways; it has earned the title of “the most important general-purpose technology of our era,” as Harvard Business Review put it. More than anything else, ML’s proven leaps forward fuel the AI hype.
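As an illustrative sketch (my own, again assuming a scikit-learn workflow rather than anything the article specifies), here is that feedback loop in miniature: candidate models are judged purely by a measurable score against a held-out benchmark of labeled data, with no appeal to “intelligence” at all:

```python
# Minimal sketch of supervised learning's benchmark-driven feedback loop:
# train candidate models, then measure each against the same held-out
# sample of labeled data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=1
)

candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(random_state=1),
}

# Progress is an objective number, not a claim of intelligence: each model
# is judged solely by its score on labeled data it has never seen.
for name, model in candidates.items():
    model.fit(X_train, y_train)
    score = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: held-out accuracy = {score:.3f}")
```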

Make it all happen with artificial general intelligence

“I predict we will see the third AI winter within the next five years… When I graduated with my PhD in AI and ML in ’91, AI was literally a bad word. No company would consider hiring someone who did AI.”

– Usama Fayyad, speaking at Machine Learning Week on June 23, 2022

There is one way out of this definitional dilemma: define AI as AGI, software capable of any intellectual task humans can perform. If that science-fiction goal were achieved, I would concede there’s a strong case it qualifies as “intelligent.” And it is a measurable goal, at least in principle if not in practicality. For example, you could benchmark a system against a series of, say, a million tasks, including thousands of the complicated email requests one might send a virtual assistant, as many varied instructions as one would issue to a warehouse worker or a robot, and even a one-paragraph brief directing the machine to run a Fortune 500 company profitably as its CEO.

AGI may set a well-defined goal, but it is an otherworldly, unwieldy ambition; nobody knows whether, let alone when, it will be achieved.

Therein lies the trouble for typical ML projects. Calling them “AI” conveys that they are built on technology in the same realm as AGI and are actively advancing in that direction. Dressing ML up as “AI” evokes an epic narrative, inflates expectations, and pitches real-world technology on unrealistic terms. This befuddles decision-makers and leads projects astray left and right.

If ML seems to be made of the same ingredients as AGI, it’s understandable that so many want to get their hands on a piece of the AI pie. AGI’s promise of wish fulfillment, a kind of ultimate power, is nearly irresistible.

But there’s a better way: realism, which I would argue is already attractive enough. Doing what we already do as an organization more effectively, and at scale, is compelling in its own right, and that is exactly what most commercial ML projects aim for. To give them a better chance of success, we need to get real. If your goal is to deliver operational value, don’t buy or sell “AI.” Say what you mean and mean what you say: if a technology consists of ML, call it that.

Since reports of the human mind’s imminent obsolescence are greatly exaggerated, a new period of AI disenchantment is near. And in the longer term, so long as we keep applying the term “AI” as hyperbolically as we do, there will be AI winters to come. But if we tone down the “AI” rhetoric, or otherwise differentiate ML from AI, we will properly insulate ML as an industry from the next AI winter. That includes resisting the temptation to ride the wave of hype, and declining to affirm eager decision-makers who seem to genuflect before an almighty altar of AI. Otherwise, the danger is clear and present: as the hype fades, the overselling is debunked, and winter arrives, many of ML’s true value propositions will be needlessly discarded along with the myths, like the baby with the bathwater.

This article is a product of the author’s work as Bicentennial Professor of Analytics at the UVA Darden School of Business.


