Risks of an AI arms race



Prabhakar Raghavan, Google's head of search, was preparing to launch the company's long-awaited artificial intelligence chatbot in Paris last February when he received some unpleasant news.

Two days earlier, the company's CEO Sundar Pichai had boasted that the Bard chatbot “leverages information from the web to provide fresh, high-quality responses.” But within hours of Google posting a short gif video showing Bard in action on Twitter, observers noticed that the bot had given a wrong answer.

Bard's answer to the prompt “What new discoveries from the James Webb Space Telescope (JWST) can I tell my 9-year-old about?” claimed that the telescope had taken the very first pictures of a planet outside our solar system. In fact, those images were produced nearly 20 years earlier by the European Southern Observatory's Very Large Telescope. The mistake damaged Bard's credibility and wiped $100 billion off the market capitalization of Google's parent company, Alphabet.

This incident highlighted the dangers of a high-pressure arms race over AI. The technology has the potential to improve accuracy, efficiency, and decision-making. But while developers are expected to set clear boundaries and act responsibly when bringing their technology to market, they are also tempted to prioritize profit over reliability.

The origins of the AI arms race date back to 2019, when Microsoft CEO Satya Nadella realized that Google's AI-powered autocomplete feature in Gmail had become so effective that his own company risked being left behind in AI development.

Try it yourself

This article is part of the “Instant Teaching Case Study” collection exploring business challenges. Read the article and consider the questions at the end.

About the author: David de Cremer is the Dunton Family Dean and professor of management and technology at Northeastern University's D'Amore-McKim School of Business in Boston. He is the author of The AI-Savvy Leader: Nine Ways to Take Back Control and Make AI Work (Harvard Business Review Press, 2024).

OpenAI, a technology startup, needed outside capital to secure additional computing resources, and that presented an opportunity. Nadella quietly made an initial $1 billion investment, believing that by working together, Microsoft could commercialize OpenAI's future discoveries and make Google “dance” for its dominant market share. He was quickly proven right.

Microsoft's quick integration of OpenAI's ChatGPT into Bing marked a strategic coup and projected an image of technological superiority over Google. To keep up, Google rushed to release its own chatbot, even though it knew Bard wasn't ready to compete with ChatGPT. That hasty mistake wiped $100 billion off Alphabet's market capitalization.

The prevailing practice in the technology industry now seems to be a myopic fixation on pioneering ever more sophisticated AI software. Fear of missing out causes companies to rush unfinished products to market, ignoring the inherent risks and costs. Meta, for example, recently confirmed its intention to double down on the AI arms race despite rising costs and a nearly 12% drop in its stock price.

There appears to be a significant lack of purpose-driven initiatives, with profits trumping considerations of social welfare. Tesla, for example, rushed to launch its AI-based “Full Self-Driving” (FSD) capability, even though the technology is far from the maturity needed for safe deployment on public roads. FSD, combined with driver inattention, has been linked to hundreds of crashes and dozens of fatalities.

As a result, Tesla had to recall over 2 million vehicles over FSD/Autopilot issues. Regulators claim that, despite identifying concerns about drivers' ability to reverse the required software updates, Tesla has not made the proposed changes part of the recall.

Compounding the problem is the prevalence of substandard, “so-so” technology. For example, two new GenAI-based portable gadgets, the Rabbit R1 and the Humane AI Pin, have sparked a backlash for being barely usable, too expensive, and failing to solve any meaningful problem.

Unfortunately, this trend continues unabated. Driven by the desire to capitalize on ChatGPT's incremental improvements as quickly as possible, some startups are rushing to launch “so-so” GenAI-based hardware devices, with little apparent interest in whether a market for them even exists. The goal seems to be to win every available AI race, regardless of whether doing so adds value for the end user. In response, OpenAI has warned startups to abandon opportunistic, short-term strategies of aimless innovation, pointing out that a more powerful version of ChatGPT is coming that could easily replicate the GPT-based apps they are launching.

Meanwhile, governments are preparing regulations governing the development and deployment of AI, and some technology companies are responding with greater responsibility. A recent open letter signed by industry leaders endorses the idea that “it is our collective responsibility to make choices that maximize the benefits and reduce the risks of AI for current and future generations.”

As the technology industry grapples with the ethical and social implications of widespread AI, some consultants, clients, and external groups are advocating for purpose-driven innovation. While regulators provide ostensible oversight, progress will require industry stakeholders to take responsibility for fostering an ecosystem that better prioritizes social welfare.

Questions for discussion

  • Are technology companies responsible when businesses deploy their artificial intelligence in wrong or unethical ways?

  • What strategies can high-tech companies follow to keep purpose at the center and see profit as an outcome of purpose?

  • Should bringing AI to market be more regulated? If so, how?

  • How do you predict the race to the bottom trend will play out over the next 5-10 years for AI-powered businesses? Which factors are most important?

  • What are the risks for businesses of not participating in the race to the bottom in AI development? How can these risks be managed by adopting a more purpose-driven strategy?

  • What factors are important in that scenario?


