Google puts AI agents at the center of how companies make money

LAS VEGAS: Alphabet is deepening its push into enterprise software, signaling to investors at Google’s annual cloud conference that AI agents, human-like digital assistants, are a cornerstone of its strategy to monetize artificial intelligence.
At a three-day conference in Las Vegas that began Wednesday, CEO Sundar Pichai, Google Cloud chief Thomas Kurian, and other Google staff sought to position the company’s AI tools as production-ready infrastructure for enterprise customers that is emerging as the industry’s most reliable source of revenue.
“The experimental phase is over and the real challenge begins,” Kurian said in his opening keynote speech. Other top AI companies, such as OpenAI and Anthropic, have aggressively shifted resources to enterprise customers in recent months. In a pre-recorded video address during his keynote, Pichai reaffirmed Alphabet’s capital spending plans of $175 billion to $185 billion this year and said “just over half” of the company’s investment in computing power for machine learning will go toward its cloud business.
This computing infrastructure also powers other key parts of the Mountain View, Calif.-based company, including its AI division, Google DeepMind.
Google announced that it will unify its suite of AI products under the name “Gemini Enterprise.” Most notably, this includes a rebrand and increased capacity for Vertex AI, a tool that allows cloud customers to choose from a variety of AI models to use for business purposes.
Google also announced a set of new governance and security features for AI agents. Agents, powerful digital assistants that can plan, decide, and act autonomously, are a rapidly growing field that is raising concerns about safety, reliability, and monitoring.
“There is definitely a strategic shift happening as the models become more sophisticated,” Kurian told Reuters in an interview last week. He said Vertex AI’s primary use case has recently shifted from “old-style machine learning” to a sudden explosion in users building their own custom AI agents.
Google is trying to outdo both traditional cloud rivals and AI startups as pressure mounts to prove benefits from massive spending to create AI. Google Cloud, once seen as lagging behind rivals like Amazon and Microsoft, is gaining traction among enterprise customers, driven by a huge bet on AI and heavy investments over the years in data centers, custom chips, and networking equipment. Marcia Bray, a senior executive at GE Appliances and a Google customer, told Reuters last week that Google’s suite of tools and enterprise data already stored in Google Cloud have enabled the company’s logistics and sales teams to deploy AI more quickly than other products the company has tested.

New Google chips
The company announced on Wednesday two new custom tensor processing units (TPUs), called the TPU 8t and the TPU 8i.
“Both are designed and engineered end-to-end for the so-called agent era and the kind of unique requirements of agent-based solutions and applications,” Mark Lohmeyer, Google’s vice president and general manager of compute and AI infrastructure, said in an interview with Reuters.
Google designed the TPU 8t to train the large language models that power chatbots like Anthropic’s Claude. Google will install its training chips in pods of 9,600 chips that can be linked together and scaled to 134,000 chips, the company said. The company said that when combined with Google’s other technologies, it can string together 1 million chips for large-scale training needs.
Google’s TPU 8i is tailored for the type of computing required to generate immediate responses from AI agents, a process called inference. The company has increased the memory on the chip itself to achieve improved performance. Google said it has 80% better performance on fast inference tasks than the previous generation, called Ironwood.

Agents over coding
In addition to traditional enterprise providers and other hyperscalers, a new class of competitors in enterprise AI is rapidly emerging: model providers. So far, coding assistants and plugins that connect AI models to existing enterprise software have emerged as lucrative channels for generating AI revenue and recouping significant investments. After achieving early success on the raw strength of their models, OpenAI and Anthropic are now moving downstream, building on those models to offer applications that perform specialized tasks, such as agent-building tools.
While rivals are focusing on coding products, Google has given little spotlight to coding at its cloud conferences. In pre-recorded comments, Pichai said that 75% of all new code at Google is generated by AI, up from 50% last fall.
Kurian instead characterized the AI battleground as one defined by agents, governance, and enterprise deployment, telling Reuters that some coding announcements would be postponed until the company’s I/O developer conference in May.
“Some people are writing code using models. They can use not only Gemini, but also other tools like Claude,” he said. “But in other cases, we have something unique. Our platform has capabilities that no one else offers.” A long-term bet to build a vast suite of its own products, from models to chips, rather than relying on third-party vendors, has given Google an edge over other large cloud providers.
This could help Google increase its overall cloud market share to 14% by the end of 2025, according to data from Synergy Research, but it remains behind rivals Amazon and Microsoft.
