OpenAI wins the AI computing race. Next is the business model.

In early December, Anthropic CEO Dario Amodei suggested that some rivals were overextending themselves in AI computing.

“Some players are YOLOing,” Amodei said, hinting that rival OpenAI has signed so many AI computing contracts that it may have trouble paying for them.

But the balance is shifting as demand surges and the system becomes strained. While OpenAI’s aggressive capacity efforts are starting to look more realistic, Anthropic has faced outages and growing pains of its own, a reminder that having enough computing power can be just as important as building better models in the AI race.

“Anthropic is in particularly bad shape right now, with a combination of genuine downtime and degraded service quality,” said Lawrence Jones, founding engineer and AI lead at Incident.io, which helps companies like Netflix and Etsy manage outages.

OpenAI CEO Sam Altman once said that “computing is destiny,” and the current situation suggests that his aggressive push for large capacity was prescient. Anthropic has since signed its own large-scale computing deal, at least six months later than OpenAI.

“OpenAI is clearly ahead of the curve when it comes to computing,” said Peter Gostev, AI Capability Lead at Arena.ai.

Given this lead, it’s strange to read reports in The Information and other media outlets that Altman and CFO Sarah Friar are at odds over whether OpenAI has signed too many computing contracts. The Wall Street Journal followed suit in an article late Monday.

OpenAI dismissed the reported rift as “ridiculous” and released a statement saying Altman and Friar are in agreement on acquiring as much compute as possible.

Still, the economic pressures are real. The Journal also reported that OpenAI missed its revenue goals and remains short of its target of 1 billion weekly ChatGPT users. Without stronger revenue growth, it will be harder to fund expensive computing deals.

As Anthropic’s recent results also show, even with soaring revenues it remains unclear how companies will profit from cutting-edge AI. Both companies are currently incurring large losses.

That leaves major AI labs in a difficult position. They need more powerful models and infrastructure than ever to serve the world with speed and reliability. That requires huge computing resources, more than they can comfortably afford today.

Optimization

One way forward is through optimization: improving how models are built and run through better software, new techniques, and redesigned hardware.

In a recent interview with technology blogger Ben Thompson, Altman pointed to OpenAI’s GPT-5.5 model: although its cost per token is higher, the number of tokens it uses to deliver results is much smaller. (Tokens are the basic units of data that AI systems process.)
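The arithmetic behind Altman’s point is simple: what matters is the price per token multiplied by the tokens consumed, not the per-token price alone. A minimal sketch with made-up numbers (these are illustrative, not actual OpenAI pricing):

```python
# Hypothetical illustration: a model with pricier tokens can still be
# cheaper per answer if it needs far fewer tokens to get there.

def total_cost(price_per_million_tokens: float, tokens_used: int) -> float:
    """Cost of one response: per-token price times tokens generated."""
    return price_per_million_tokens * tokens_used / 1_000_000

# Older model: cheap tokens, but verbose output.
old = total_cost(price_per_million_tokens=2.00, tokens_used=50_000)
# Newer model: tokens cost more, but far fewer are needed per answer.
new = total_cost(price_per_million_tokens=5.00, tokens_used=10_000)

print(f"old model: ${old:.2f} per answer, new model: ${new:.2f} per answer")
```

Under these assumed numbers, the newer model is half the cost per answer despite charging 2.5x more per token, which is the efficiency story Altman is telling.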

Altman said OpenAI is no longer a “token factory” but now an “intelligence factory.”

“We just want to provide as much intelligence as possible at the lowest price,” he added, noting that customers don’t care how that efficiency is achieved.

Jones expects these optimization efforts to begin to take effect over the next year. “That changes everything about how you model costs,” he said. “Over the next five years, the economics of training and serving these models will change through both software and hardware.”

Such improvements could make current business models more sustainable.

Up the stack

Another strategy is to move “up the stack” beyond selling raw model access.

Anthropic accomplishes this by building specialized tools for industries such as finance, law, design, security, and especially software development.

OpenAI is also diversifying, launching enterprise products through a partnership with Amazon’s cloud business, expanding its Codex coding tools, experimenting with advertising on ChatGPT, and developing consumer hardware.

“It would be logical to think that they see at least part of their future in these areas and are looking to capture customer relationships and product surface area rather than just selling tokens,” Jones said.

Better models, bigger questions

Meanwhile, new technological transformations are on the horizon. Large clusters of Nvidia’s Blackwell GPUs are currently being used to train next-generation AI models expected in about six months. Nvidia’s more powerful Vera Rubin GPUs are expected to arrive in late 2026.

AI experts, including Jones, believe these advances will result in models that are significantly more capable and cheaper to run. This could lead to new applications and more sustainable revenue streams.

It also explains why OpenAI and Anthropic are now competing for computing power. Neither wants to be caught short of capacity when these more powerful models arrive.

However, a fundamental question remains: is there enough demand for much more powerful AI?

“Maybe most companies don’t actually need more intelligent models,” Jones said. “They could opt for a cheaper model instead, and at that point you have to start asking what the more intelligent model is for.”

More advanced models may solve more difficult problems and enable entirely new products, but that likely means serving a different market.

“If you’re a CFO, I don’t think it’s clear that the same people who are buying from you today will want a more powerful model just because you build one,” he added. “Anyone responsible for planning such a large investment would be nervous.”

For now, Altman’s big bet on computing looks increasingly justified. But the bigger challenge is not just building powerful AI, but understanding who will pay for it and why.

Sign up for BI’s Tech Memo newsletter here. Contact the author by email: abarr@businessinsider.com.