The Economist’s study reveals gap between AI optimism and actual returns
Jennifer Rawinski
May 14, 2026

Optimism about artificial intelligence is growing, even if the sentiment outweighs the evidence, according to a recent survey from Economist Enterprise.
Four out of five executives say their company’s AI programs are exceeding expectations, but fewer than half actually track whether this is true, the report found. Economist Enterprise surveyed more than 1,200 senior technology executives from 18 countries, including 296 CIOs, and conducted qualitative interviews with technology leaders from companies including Disney, Mercedes-Benz, Nasdaq, Atlassian, Takeda, and more.
“Everyone is jumping on the bandwagon of being ahead on AI because boards expect it,” said Eddy Milev, who led the study. “The industry is generally pretty hyped on this subject, but the reality doesn’t always match.”
The report introduces a benchmarking framework that separates companies reaping real AI benefits from those stuck in pilot purgatory, assessing them across four dimensions: strategy, technology infrastructure, governance, and workforce transformation.
Among companies the report classifies as AI leaders, 84% said revenue was better than expected, but only 43% said their teams actually measure business impact. The chasm also appears between strategically minded executives and those who work closer to the technology. Almost 90% of CTOs said AI adoption is ahead of schedule, but only three in four senior vice presidents agreed. In IT, three in five C-level technology leaders say AI is fully embedded at scale, while only two in five vice presidents agree.
The survey also found that companies still struggle to move AI pilots into production: 58% report a timeline of 7 to 12 months, and only 40% have an established AI development lifecycle.
“Process is the key word here,” Milev said. “A surprising and worrying number of companies say they have a framework but haven’t fully applied it, or they simply don’t have a framework that covers the entire lifecycle of an AI project.”
When it comes to data governance, the study found that 97% of companies with unified data architectures said they were achieving ROI ahead of schedule. For companies without a unified data architecture, that number dropped to 77%. Additionally, 59% of respondents identified data storage, movement, and replication as their largest ongoing AI cost. In contrast, only 25% mentioned infrastructure and computing costs.
“When we ask companies what they are most concerned about, the largest share points to storing, moving, and replicating data,” Milev said. “This is always a top cost item because it’s hidden. CIOs should really think about this because it’s squarely within their control.”
Feedback from management supports the data. “We’ve worked to dispose of 99% of legacy and fragmented data. This allows AI to answer questions more clearly and makes the insights of our AI agents more valuable,” Tal Saraf, Atlassian’s senior vice president of engineering and CIO, told The Economist Enterprise.
Although clean data leads to better results, governance is not applied consistently throughout the project lifecycle. Approximately 59% of respondents said they conduct security reviews during development and before deployment, but only 39% continue to do so after the system is in production. One in eight respondents admitted that they only review governance when something goes wrong.
Milev said this manual approach is driven by the way companies have historically deployed other types of enterprise software, which is not a process well-suited for AI.
“Companies often approach AI the same way they approach traditional enterprise systems, where they don’t really change when you deploy them. But AI is a constantly evolving technology, and when you use it, the behavior of the AI model can actually change,” he said.
Governance around agents is also a patchwork, and one that companies are only beginning to address. Three out of five leading AI adopters say autonomous agents are now doing real work, but fewer than half have mandated a formal governance framework.
The report identified what it calls the “deadly triad” of agent security risks: agents have access to untrusted external content, sensitive corporate data, and external communication capabilities. This creates problems because large language models cannot distinguish between underlying data and instructions, leaving them vulnerable to malicious commands hidden in the text.
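The triad amounts to a simple policy: an agent may hold any two of the three capabilities, but never all three at once, since untrusted input plus sensitive data plus an outbound channel is an exfiltration path. A minimal sketch of such a check in Python (the names and structure here are illustrative, not from the report):

```python
from dataclasses import dataclass

# The three capabilities that together form the "deadly triad".
UNTRUSTED_CONTENT = "untrusted_content"  # e.g. web pages, inbound email
SENSITIVE_DATA = "sensitive_data"        # e.g. internal databases, HR records
EXTERNAL_COMMS = "external_comms"        # e.g. outbound email, external API calls

TRIAD = {UNTRUSTED_CONTENT, SENSITIVE_DATA, EXTERNAL_COMMS}


@dataclass
class AgentConfig:
    name: str
    capabilities: set


def violates_triad(agent: AgentConfig) -> bool:
    """An agent holding all three capabilities can be tricked, via
    instructions hidden in untrusted content, into sending sensitive
    data out through its external channel."""
    return TRIAD.issubset(agent.capabilities)


# Any two capabilities are acceptable; all three are flagged.
summarizer = AgentConfig("web-summarizer", {UNTRUSTED_CONTENT, EXTERNAL_COMMS})
analyst = AgentConfig("data-analyst", TRIAD)

print(violates_triad(summarizer))  # False
print(violates_triad(analyst))     # True
```

In practice the check would run at agent provisioning time, forcing teams to drop one leg of the triad (for example, routing outbound messages through human approval) before an agent is deployed.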
“It gets to the heart of what agents need to work well and what makes them a risk,” Milev said. “You need to be able to connect to other enterprise systems and access a wide range of data, but you also need to be able to reach out to the open web. And the convergence of these two realities isn’t always pretty.”
He said major companies are building systems to avoid this risk, including implementing AI gateways, giving business owners kill switch privileges, and giving monitoring agents authority over other agents.
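One way to picture these mitigations is a gateway that sits between agents and the systems they act on, with a business-owner kill switch and an audit trail. A hypothetical sketch (class and method names are assumptions for illustration, not a real product API):

```python
class AgentGateway:
    """Hypothetical AI gateway: every agent action passes through one
    chokepoint, where a business owner can disable an agent instantly
    and every request is logged for monitoring."""

    def __init__(self):
        self._killed = set()   # agents disabled via the kill switch
        self.audit_log = []    # (agent, action) pairs for review

    def kill(self, agent_name: str):
        # Business-owner kill switch: blocks the agent immediately.
        self._killed.add(agent_name)

    def dispatch(self, agent_name: str, action: str) -> str:
        # Log first, so even blocked attempts are visible to monitors.
        self.audit_log.append((agent_name, action))
        if agent_name in self._killed:
            return f"BLOCKED: {agent_name} is disabled"
        return f"OK: {agent_name} performed {action}"


gw = AgentGateway()
print(gw.dispatch("invoice-bot", "send_payment"))  # OK: invoice-bot performed send_payment
gw.kill("invoice-bot")
print(gw.dispatch("invoice-bot", "send_payment"))  # BLOCKED: invoice-bot is disabled
```

A monitoring agent with authority over other agents, the third mitigation Milev mentions, would in this sketch simply be a consumer of `audit_log` that is itself allowed to call `kill`.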
What was consistent across the interviews, although not fully captured in the survey data, was the question of how culture determines the success of AI projects, Milev said.
The report states that task-level job redesign, meaningful training, and appropriate incentives are more important than increasing the sophistication of AI systems. At the same time, half of respondents said human review is the highest ongoing AI cost, while only 4% said upskilling employees is a significant expense.
“You can have the processes in place and get the technology right, but without getting the culture right, it’s not enough,” Chas Murphy, Disney’s senior vice president of direct-to-consumer data and analytics, said in the report.
