Much has been written about the high failure rate of AI projects. In an increasingly agile world, CIOs and their organizations naturally want to embrace the idea captured in the book’s title, Fail Fast, Learn Faster: move quickly, experiment, and learn along the way.
But too many organizations rush into AI without the right foundation in place.
Before embarking on an AI journey, CIOs need to act like experienced mountaineers. This means establishing a strong basecamp with your business partners, aligning on the key business problems and opportunities to be solved, and preparing your organization for the climb ahead.
The reason is simple: achieving value from AI, like any other major endeavor, requires discipline as well as speed. That discipline means defining success criteria, governance, and compliance from the beginning, along with a clear strategy tied to explicit business outcomes. Prioritization matters here: there will always be more AI use cases than capacity. CIOs need to focus on the initiatives most likely to have measurable business impact, especially as software pricing becomes increasingly tied to cost reduction and labor replacement rates.
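The prioritization step above can be made concrete with a simple scoring model. The sketch below is hypothetical: the weighting scheme, field names, and example use cases are my own assumptions, not a method from the article, but it shows how "measurable business impact" can be favored explicitly when ranking candidate initiatives.

```python
# Hypothetical sketch: score candidate AI initiatives on measurable
# impact, feasibility, and scalability, then fund the top of the list.
# Weights and fields are illustrative assumptions only.

def priority_score(use_case: dict) -> float:
    """Weighted score that favors measurable business impact."""
    return (0.5 * use_case["impact"]          # expected cost/labor savings, 0-10
            + 0.3 * use_case["feasibility"]   # data and skills readiness, 0-10
            + 0.2 * use_case["scalability"])  # path beyond the pilot, 0-10

candidates = [
    {"name": "invoice triage",  "impact": 8, "feasibility": 7, "scalability": 6},
    {"name": "chat summarizer", "impact": 4, "feasibility": 9, "scalability": 5},
]
ranked = sorted(candidates, key=priority_score, reverse=True)
```

Ranking by a transparent formula, rather than by whichever pilot has the loudest sponsor, is what keeps the portfolio tied to business outcomes.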
Equally important, CIOs must avoid the trap of endless pilots by ensuring a reliable expansion path for each chosen AI project. Without one, pilots simply pile up without delivering real work.
Once this foundation is in place, organizations can move to pilots with calculated risks. Beyond testing risk, pilots can also be used to rethink business capabilities and processes and, sometimes, as futurist Linda Yates suggests, “unleash your inner unicorn.”
What really separates pilot from production?
Let’s dig into the constructs of project success and then look at the causes of high project failure rates.
Research at Dresner Advisory Services found three characteristics that distinguish projects that make it from pilot to production:
- Success with business intelligence (BI). This means the organization’s data is industrialized: consistent, controlled, usable at scale, and therefore AI-enabled.
- Success in data science and machine learning. This means optimization models for more complex cases, including agentic AI, already exist. More importantly, because the organization has already mastered AI, less organizational learning is required to sell the value and cost of AI internally.
- A data leader exists. Senior data leaders are in place with strong business relationships, making it easier to co-create the future of AI and prioritize the AI projects that are right for the business.
These weren’t nice-to-haves. They determined whether a project would scale.
With this background in mind, I wanted to talk to a leading consultant who helps companies implement AI every day. What do they look at when working with clients? Vamsi Duvvuri is the AI and data leader at Ernst & Young. Duvvuri pointed to the results of the firm’s latest EY Technology Pulse poll, asserting that “AI projects fail when speed outweighs structure.” The survey of 500 U.S. business leaders working in the technology industry found:
- 85% of respondents prioritize speed to market over extensive AI scrutiny.
- 52% reported that department-level AI initiatives are being conducted without formal oversight.
- 78% say the pace of adoption is outpacing their ability to manage risk.
This is frightening and reminds us of what CIOs were trying to avoid years ago: unvetted, unintegrated, and unprotected shadow IT. The difference now is that AI embeds these risks directly into the workflow and spreads them more quickly.
Worse, Duvvuri says, the problem extends beyond project prioritization and selection. In practice, he said, projects are often delayed by weak governance, unclear ownership, poor data, and too many disconnected pilots. “The result is not that ambition has failed, but that value has stalled,” he said. “For example, a company launches multiple AI pilots to speed up the work of analysts, but the analysts are still reconciling data, managing complexity and noise, and stitching together decisions across those multiple pilot projects. Value appears briefly, but eventually plateaus.”
This ties back nicely to the three qualities identified at the start of this section.
Why didn’t adding more pilots create more value?
According to our Dresner data, 15% of organizations are operating with agentic AI and 34% are operating some form of generative AI-based solution. We estimate that 34% of organizations have the three success criteria listed above: BI maturity, AI and machine learning skills, and strong data leaders.
Meanwhile, 34% of organizations are experimenting with agentic AI, and 53% say they are experimenting with generative AI. The gap between experimentation and readiness is striking: it means IT organizations can deploy tactical generative AI solutions without modernizing the underlying data, establishing governance, or thinking through business priorities.
Given this, the question remains: how do organizations create space for pilots that deliver strategic and measurable production value?
Clearly, organizations need to embed responsible AI into their operations. Professor Pedro Amorim advises that CIOs run venture-style portfolios: fund small, time-boxed bets, learn quickly, and scale the winners that have a clear path to industrialization.
At the same time, he added, organizations “need to put basic guardrails in place early on (data classification, privacy/intellectual property rules, human review for sensitive decisions, evaluation benchmarks, and explicit prohibition criteria) and make sure budget exists at the top of the funnel to avoid being forced into one or two big bets.”
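The guardrails Amorim lists can be expressed as a pre-launch checklist that every proposed pilot passes through. The sketch below is purely illustrative: the field names, classification labels, and prohibited use cases are invented for the example, not drawn from EY or Dresner material.

```python
from dataclasses import dataclass

# Hypothetical guardrail checklist: data classification, privacy/IP
# rules, human review for sensitive decisions, and explicit
# prohibition criteria. All names here are illustrative assumptions.

PROHIBITED_USES = {"biometric_surveillance", "automated_credit_denial"}

@dataclass
class PilotProposal:
    name: str
    data_classification: str   # e.g. "public", "internal", "restricted"
    handles_pii: bool
    sensitive_decisions: bool
    human_in_the_loop: bool
    use_case: str

def guardrail_violations(p: PilotProposal) -> list[str]:
    """Return the list of guardrails a proposed pilot would break."""
    issues = []
    if p.use_case in PROHIBITED_USES:
        issues.append("use case is explicitly prohibited")
    if p.handles_pii and p.data_classification != "restricted":
        issues.append("PII must be classified as restricted")
    if p.sensitive_decisions and not p.human_in_the_loop:
        issues.append("sensitive decisions require human review")
    return issues
```

A pilot with an empty violations list proceeds; anything else goes back to its sponsor before any budget is committed, which is how guardrails stay cheap to enforce at the top of the funnel.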
Smart experimentation therefore includes strong data integrity, built-in cybersecurity, and continuous monitoring for issues such as bias and model drift.
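Continuous monitoring for model drift, mentioned above, is often operationalized with a distribution-stability statistic. The sketch below uses the Population Stability Index (PSI); the binning, example distributions, and the common 0.2 alert threshold are assumptions for illustration, not a standard the article prescribes.

```python
import math

# Illustrative drift monitor: the Population Stability Index (PSI)
# compares a feature's live distribution against its training-time
# baseline. The 0.2 threshold is a common rule of thumb, assumed here.

def psi(expected: list[float], actual: list[float]) -> float:
    """PSI between two binned probability distributions."""
    eps = 1e-6  # clamp to avoid log(0) on empty bins
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]   # bin shares at training time
live     = [0.10, 0.20, 0.30, 0.40]   # bin shares in production
drifted = psi(baseline, live) > 0.2   # flag for investigation
```

Running a check like this on every scoring batch turns "continuous monitoring" from a slide-deck promise into an alert a team can act on before accuracy quietly degrades.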
Trust is what makes AI sustainable. Transparency, governance, training, and clear human oversight are essential for employees to understand how AI works and that human judgment remains important.
“Smart experimentation means deciding where to place complexity, and it’s the CIO’s role to help agents absorb variability and orchestration while humans maintain judgment and critical decision-making,” Duvvuri said.
In reality, this requires fewer, more disciplined experiments grounded in real-world workflows rather than isolated tasks. This matters because organizations need to move quickly, yet uncontrolled speed increases failures. For this reason, Duvvuri emphasized, “The problem is not momentum, but control.”
Rather than piloting AI to “assist” customer service representatives, he said companies should sponsor experiments where agents triage, resolve, and route cases end-to-end, escalating to humans only for exceptions, policy decisions, and customer empathy.
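The end-to-end pattern Duvvuri describes, with the agent resolving cases itself and escalating only for exceptions, policy decisions, and empathy, can be sketched as a routing function. The case fields, categories, and confidence threshold below are invented for illustration.

```python
# Hypothetical triage router: the agent resolves cases end-to-end and
# hands off to a human only at the exception/policy/empathy boundary.
# Categories and the 0.8 confidence cutoff are illustrative assumptions.

ESCALATION_CATEGORIES = {"policy_exception", "complaint", "bereavement"}

def handle_case(case: dict) -> str:
    """Return who resolves the case: 'agent' or 'human'."""
    if case.get("category") in ESCALATION_CATEGORIES:
        return "human"                      # policy or empathy boundary
    if case.get("confidence", 0.0) < 0.8:
        return "human"                      # low-confidence exception
    return "agent"                          # resolved end-to-end
```

The design choice is the point: the human boundary is declared explicitly in the workflow, rather than leaving the agent as an "assistant" whose output a person re-checks on every case.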
Successful pilots demonstrate not only accuracy but also maneuverability. “Smart experimentation requires an AI-native approach to software delivery,” he said.
Consider risk from day one
Our Dresner research found that the main risks CIOs and data leaders are concerned about include:
- Data security/privacy concerns.
- Quality/accuracy of responses.
- Potential for unintended consequences.
- Compliance with laws and regulations.
So how do smart organizations make this a reality, anticipating, assessing, and mitigating AI risks from the beginning?
Successful organizations have CIOs who bring people across the organization together to co-create the necessary guardrails. It’s important to remember that minimizing risk does not mean slowing down innovation. It’s about alignment and shared purpose.
For this reason, Duvvuri said, “Risks need to be designed in from day one. As AI accelerates action, unmanaged use amplifies risk,” pointing to EY data showing that 45% of technology leaders report confirmed or suspected sensitive data breaches related to unauthorized generative AI use, and 39% report IP breaches.
It’s not a tool issue, it’s a design failure.
He said CIOs need to standardize on approved platforms, embed controls directly into workflows, and clearly define where agents act autonomously and where human intervention is required. Done right, governance can be an enabler of scale rather than a brake on innovation.
Duvvuri suggested that CIOs establish three controls: approved AI tools, real-time monitoring of data and IP risks, and clear authority to stop non-compliant deployments.
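Those three controls can converge in a single deployment gate. The sketch below is a hypothetical illustration: the tool names and risk-flag labels are invented, and a real implementation would sit in a CI/CD pipeline or platform admission controller rather than a standalone function.

```python
# Illustrative deployment gate combining an approved-tool allowlist,
# monitoring flags for data/IP risk, and the authority to stop
# non-compliant deployments. Names and flags are invented examples.

APPROVED_TOOLS = {"internal-llm-gateway", "vendor-copilot-enterprise"}

def deployment_allowed(tool: str, risk_flags: set[str]) -> bool:
    """Gate a deployment: approved tool and no open data/IP risk flags."""
    if tool not in APPROVED_TOOLS:
        return False                        # unapproved tool: stop
    if risk_flags & {"sensitive_data_leak", "ip_exposure"}:
        return False                        # open risk flag: stop
    return True
```

Because the check runs before release rather than after an incident, safe behavior is built into the system, which is exactly the "governance as an enabler of scale" framing above.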
“Teams will be able to move faster because safe behavior is built into the system, rather than forced after the fact. As intelligence becomes cheaper and more available, enterprises will not be simple by default. Winners will intentionally move complexity from humans to machines, while still firmly retaining judgment, trust, and accountability for humans,” he said.
Agile with discipline: Build the foundation first
CIOs need to apply agile principles to AI, but not without discipline. Organizations need a clear strategy tied to explicit business outcomes, with success criteria, governance, and compliance defined from the beginning. Data maturity and clearly defined guardrails are essential. This foundation enables smarter experimentation while accounting for risk from the start. More mature organizations have a head start, having already addressed many of these challenges. For CIOs in less mature environments, the priorities are clear: invest in the process and data capabilities needed for early success, then refine, scale, and industrialize your data and business processes.
