Enterprises are going all-in on agentic AI, accelerating their efforts even as adoption outpaces the controls needed to manage it.
The gap between agentic AI ambition and readiness is widening as organizations move from experimentation to production. The key question is no longer whether AI agents can automate work, but whether the data governance, observability, and identity foundations required to run these autonomous systems at scale are in place. AI and data leaders know they must prove that every dollar spent on agentic AI delivers value, while controlling costs and reducing operational risk as adoption grows.
Agentic AI reveals gaps in enterprise data readiness
The pressure shows up in the research figures. According to Forrester Research, enterprises are entering the “hard hat” stage of AI, where cost control, governance, and operational reliability matter more than impressive demonstrations. Forrester predicts that scrutiny of 25% of the AI spending planned for 2026 will increase by 2027 as CFOs sharpen their pursuit of ROI. Meanwhile, Deloitte’s most recent annual survey on enterprise AI adoption, fielded in August and September 2025, found that 74% of enterprises plan to deploy agentic AI at a moderate or large scale within two years, up from 23% at the time of the survey. However, only 21% of the 3,235 respondents said their organization has a mature agentic AI governance model.
“This is a different type of thing. It’s not the way the software we’re used to works,” said Jeff Pollard, vice president and principal analyst at Forrester Research. “The difference with agents is that we give them agency. That’s an important distinction, because this is the first time we’ve broadly introduced software into an environment that has an intention, a goal, and the ability to do something without being explicitly told what to do.”
Pollard said some agentic AI risks are well known, such as leaking sensitive data. Newer risks include agents taking harmful actions because an attacker has changed their objectives, or because issues in an organization’s IT and data infrastructure cause performance drift.
A McKinsey study published in 2026 found that security, risk management, and governance concerns are among the most frequently cited barriers to expanding AI into agentic systems. OWASP, which publishes AI security guidance, highlights goal hijacking, tool abuse, and identity and privilege abuse as core threats to autonomous systems in 2026.
When an agent goes beyond its intended scope, the consequences can be serious, even catastrophic, from disrupting business operations to creating safety risks in some settings. For enterprise leaders, then, investing in agentic AI is less a model decision than a data governance, observability, architecture, and identity and access management (IAM) decision. To limit risk, organizations need controls that keep agentic AI safe, manageable, and resilient.
“The key here is limited autonomy,” said Adnan Masood, chief AI architect at IT consultancy UST.
That means controlling agents’ identities, limiting access to data and actions they can take, and monitoring their actions, he said.
Policy as a control layer for agentic AI
Based on the risks to the organization and the controls needed to contain them, Masood said, leaders must establish which actions agents can perform on their own and which require human approval. Decisions about acceptable agentic AI use should be codified in formal policies that govern where agents are used, how much autonomy they have, and what safeguards apply.
“We need to think about digital agents as workers and think about the policies surrounding them the way we do for humans,” Masood explained.
Policy and governance capabilities are also becoming a purchasing requirement. IDC says organizations increasingly need an AI governance platform that provides a centralized inventory of AI systems and supports policy management, risk assessment, audit trails, and continuous monitoring across the lifecycle of traditional, generative, and agentic AI models. In practice, that means defining where agents can act autonomously, where human approval is required, and what records of system behavior must be kept for audit and compliance, as the sketch below illustrates.
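To make that concrete, here is a minimal sketch of how such a policy might be codified, written in Python for illustration. The agent name, the actions, and the decide() helper are hypothetical assumptions, not any vendor's API.

```python
# A minimal policy-as-code sketch for agents: which actions run autonomously,
# which need human approval, and a default-deny rule for everything else.
# Agent names, actions, and decide() are hypothetical, for illustration only.
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"                # agent may act on its own
    REQUIRE_APPROVAL = "approve"   # escalate to a human first
    DENY = "deny"                  # never permitted for this agent

@dataclass(frozen=True)
class AgentPolicy:
    autonomous: frozenset[str]         # actions permitted without review
    approval_required: frozenset[str]  # actions gated on human sign-off

AGENT_POLICIES: dict[str, AgentPolicy] = {
    "invoice-triage-agent": AgentPolicy(
        autonomous=frozenset({"read_invoice", "classify_invoice"}),
        approval_required=frozenset({"issue_refund", "update_vendor_record"}),
    ),
}

def decide(agent_id: str, action: str) -> Decision:
    """Default-deny: any action not explicitly listed is refused."""
    policy = AGENT_POLICIES.get(agent_id)
    if policy is None:
        return Decision.DENY
    if action in policy.autonomous:
        return Decision.ALLOW
    if action in policy.approval_required:
        return Decision.REQUIRE_APPROVAL
    return Decision.DENY
```

The default-deny posture means an unlisted action is refused rather than silently allowed, which dovetails with the centralized inventory, audit trails, and policy management IDC describes.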
How to track agent actions across data workflows
Pollard said it is impossible to govern agents without having data about the actions they take.
“We need full observability into agent behavior, tool access, data access, the identities and tasks agents are operating on behalf of, and telemetry into the agent’s reasoning: why it acted the way it did, which steps it chose to take, and what it handed off to other agents,” he said. “You need data about what’s going on. And you need something on top of that to understand the agent’s intent.”
As a result, observability for agentic AI systems means more than application uptime. It should include runtime logs, metrics, and traces, as well as decision telemetry, tool-usage records, and business-context signals that reveal when agents deviate or behave harmfully.
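As an illustration, the Python sketch below records each tool call as a structured event carrying the identity, task, rationale, and outcome fields Pollard describes. The schema and the emit() sink are assumptions for the example, not a standard.

```python
# Hypothetical decision-telemetry sketch: every agent tool call becomes a
# structured JSON event that a trace pipeline or SIEM could ingest.
import json, time, uuid

def emit(event: dict) -> None:
    # Stand-in sink; in practice this would ship to a log or trace backend.
    print(json.dumps(event))

def record_tool_call(agent_id: str, task_id: str, on_behalf_of: str,
                     tool: str, rationale: str, outcome: str) -> None:
    emit({
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "agent_id": agent_id,          # which agent acted
        "task_id": task_id,            # the workflow step it served
        "on_behalf_of": on_behalf_of,  # the identity it represents
        "tool": tool,                  # which tool or data source it touched
        "rationale": rationale,        # the agent's stated reason for the step
        "outcome": outcome,            # success, error, refusal, handoff
    })

record_tool_call("invoice-triage-agent", "task-4711", "user:jdoe",
                 "crm.lookup_customer", "match invoice to account", "success")
```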
Treat agents as managed identities
Organizations with mature cybersecurity and data privacy practices typically have strong IAM programs that ensure only authorized employees and systems can access corporate data and applications, and only when needed to do their jobs. According to Masood, enterprises need the same IAM controls for AI agents.
“You have to make sure that the actions that the agent is allowed to perform are the only actions that the agent will perform,” he said.
Masood also said organizations should create short-lived access privileges for agents: access is granted only while the agent is authorized to complete a specific task within a workflow.
“Certification should not be forever,” he added.
According to OWASP, beyond data misuse by agents, attackers can also exploit identity and privilege vulnerabilities. To prevent such incidents, the group recommends both task-based and time-limited privileges, validating every privileged step through a centralized policy engine, and escalating critical actions for human approval. Deloitte likewise emphasizes that automated decision-making should be auditable and integrated into existing governance processes rather than managed through informal or shadow controls.
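A minimal sketch of what task-based, time-limited privileges might look like in code, assuming an HMAC-signed token format invented for this illustration: the token names the permitted actions and expires after minutes, and a centralized authorize() check enforces both.

```python
# Sketch of short-lived, task-scoped agent credentials, per the "certification
# should not be forever" principle. The HMAC-signed format is illustrative.
import base64, hashlib, hmac, json, time

SECRET = b"demo-signing-key"  # in practice: a managed, regularly rotated key

def mint_token(agent_id: str, task_id: str, actions: list[str],
               ttl_seconds: int = 300) -> str:
    claims = {"agent": agent_id, "task": task_id,
              "actions": actions, "exp": time.time() + ttl_seconds}
    body = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest().encode()
    return base64.urlsafe_b64encode(body + b"." + sig).decode()

def authorize(token: str, action: str) -> bool:
    """Centralized check: valid signature, unexpired, action in scope."""
    raw = base64.urlsafe_b64decode(token.encode())
    body, _, sig = raw.rpartition(b".")  # hex signature contains no "."
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        return False                     # tampered or foreign token
    claims = json.loads(body)
    if time.time() > claims["exp"]:
        return False                     # expired: access was time-limited
    return action in claims["actions"]   # task-based: only granted actions

tok = mint_token("invoice-triage-agent", "task-4711", ["read_invoice"])
assert authorize(tok, "read_invoice") and not authorize(tok, "issue_refund")
```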
Data architecture as a control point for agents
Siloed data stores and static data warehouse models do not support secure, governable, and resilient agents, said Pablo Ballarin, co-founder and virtual chief information security officer of cybersecurity services firm Balusian SL and a member of the Emerging Trends Working Group at ISACA, an association of governance professionals.
Ballarin said this is why it’s important for organizations to move to a dynamic, entity-centric, managed data fabric architecture.
That’s the strategy at the University of St. Thomas in St. Paul, Minnesota. Jenna Zance, the university’s principal data and AI officer, said the university uses a centralized data lakehouse, data mesh architecture, and metadata tagging to support the use of agentic AI.
“It gives us the ability to govern,” she said. “We also keep the data and the business close to create data products, so when we talk about agent AI, we can keep the agent in a specific domain and the agent doesn’t have to access the entire database.”
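As a rough illustration of that domain scoping, the sketch below models a metadata-tagged data product catalog in which an agent bound to one domain cannot query another domain’s products. The catalog, domains, and clearance tags are hypothetical.

```python
# Illustrative domain scoping in a data mesh: metadata tags bind each data
# product to a domain, and an agent only queries its own domain's products.
from dataclasses import dataclass

@dataclass(frozen=True)
class DataProduct:
    name: str
    domain: str           # owning business domain, e.g. "admissions"
    tags: frozenset[str]  # governance metadata, e.g. {"pii", "restricted"}

CATALOG: dict[str, DataProduct] = {
    "admissions.applications": DataProduct(
        "applications", "admissions", frozenset({"pii"})),
    "finance.tuition_ledger": DataProduct(
        "tuition_ledger", "finance", frozenset({"restricted"})),
}

def agent_can_query(agent_domain: str, product_key: str,
                    agent_clearances: frozenset[str]) -> bool:
    product = CATALOG.get(product_key)
    if product is None or product.domain != agent_domain:
        return False  # agent never sees data outside its own domain
    # Every governance tag on the product must be covered by a clearance.
    return product.tags <= agent_clearances

# An admissions agent with PII clearance reads only its own domain's product.
assert agent_can_query("admissions", "admissions.applications", frozenset({"pii"}))
assert not agent_can_query("admissions", "finance.tuition_ledger", frozenset({"pii"}))
```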
From deployment to ongoing agent monitoring
IEEE senior member Arpita Soni said modern data architectures let organizations embed control and policy enforcement at the data and access layers. But organizations also need to continuously monitor their data environments and AI agents, analyzing observability data to ensure those controls and enforcement mechanisms work as expected, Soni said.
“Everything an agent does needs to be monitored and tracked,” she said, adding that organizations also need to adapt their security information and event management (SIEM) systems to ingest agentic AI monitoring data and alert on issues such as model drift.
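One simple, hypothetical way to turn agent telemetry into SIEM alerts is to compare a rolling window of outcomes against a baseline and emit an alert event when the gap crosses a threshold, as in this sketch. The thresholds, alert schema, and send_to_siem() forwarder are all assumptions.

```python
# Hedged sketch of agent drift monitoring feeding a SIEM: track the agent's
# recent tool-error rate against a baseline and alert when it drifts.
import json
from collections import deque

def send_to_siem(event: dict) -> None:
    # Stand-in for a SIEM forwarder (syslog, HTTP collector, etc.).
    print(json.dumps(event))

class DriftMonitor:
    def __init__(self, window: int = 200, threshold: float = 0.15):
        self.baseline_error_rate = 0.02    # expected error rate (assumed)
        self.recent: deque[bool] = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, agent_id: str, ok: bool) -> None:
        self.recent.append(not ok)         # record whether the step failed
        if len(self.recent) == self.recent.maxlen:
            rate = sum(self.recent) / len(self.recent)
            if rate - self.baseline_error_rate > self.threshold:
                send_to_siem({
                    "alert": "agent_drift",
                    "agent_id": agent_id,
                    "recent_error_rate": round(rate, 3),
                    "baseline_error_rate": self.baseline_error_rate,
                })
```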
Organizations also need to retain that monitoring data so they can audit agents when they produce incorrect output.
The need to monitor agents is not theoretical. In 2022, Air Canada’s chatbot misrepresented the airline’s bereavement fare policy to a customer, and in 2024 a Canadian tribunal ordered the airline to pay damages. The case established that companies can be held liable for false information provided by AI systems acting on their behalf.
Weak controls expose companies to losses such as remediation costs, compensation claims, and reputational damage. Strong agentic AI governance, by contrast, improves execution speed, data quality, and ROI.
Mary K. Pratt is an award-winning freelance journalist focused on covering corporate IT and cybersecurity management.
