Okta has released its Businesses at Work 2026 report in Australia, alongside the general availability of Okta for AI Agents.
The findings suggest that Australian organizations are deploying AI agents faster than they can secure them.
The report points to the proliferation of non-human identities within enterprise systems, including AI agents, bots, and service accounts. In some environments, these identities outnumber human users by as much as 45 to 1.
Security capabilities have not kept pace with this growth. Just 10% of organizations say their identity systems are fully equipped to manage and protect non-human identities, and 41% say they don’t have a single team responsible for AI security.
This suggests that many companies are moving AI tools into daily operations before establishing clear governance for risk ownership. It also indicates that identity and access management will become a central concern as AI systems gain access to data, applications, and workflows.
The control gap
The rapid adoption of AI agents is widening the gap between deployment and monitoring. Unlike traditional software tools, AI agents can interact with multiple applications, use sensitive data, and perform actions across systems without continuous human supervision.
This change is creating new categories of identity risk, as many agents are not tracked or managed in the same way as employees and contractors. This problem is further exacerbated by shadow AI, where tools and agents are used without formal authorization or visibility.
Industry research cited in the report suggests that this issue is already surfacing in security incidents. Approximately 88% of organizations reported confirmed or suspected security incidents related to AI agents, but only 22% said they managed the identities associated with those agents.
Mike Reddie, vice president and ANZ general manager at Okta, said organizations were now facing management issues rather than simple implementation challenges.
“As organizations move from experimenting with AI to embedding it into daily operations, the challenge is no longer about adoption, but control,” said Reddie.
“AI agents are effectively becoming the new workforce. Without visibility and control over these identities, organizations risk introducing new security gaps at scale.”
Reddie said AI security is not an entirely new class of threat, but one rooted in identity systems.
“Rather than creating new security problems, AI amplifies existing security problems of identity. If organizations want to securely scale AI, they need to start with visibility, access control, and governance, which gives them control at scale.”
New products
Alongside the report, Okta for AI Agents is now generally available. The product is designed to help organizations manage AI agents as identities across cloud platforms, software applications, and AI frameworks.
Okta described the product as vendor-neutral and said it extends identity security controls beyond human users. It aims to help enterprises discover AI agents, bring them under control, and apply rules governing their access and activity.
The product is built around three questions: what AI agents exist, what can they connect to, and what can they do? In practice, this means discovering agents across cloud, SaaS, and custom environments, issuing scoped, short-lived credentials, and governing agent actions through authorization, audit trails, and revocation controls.
It also aims to address unmanaged deployments by detecting shadow AI and allowing organizations to deactivate agents or revoke access if necessary. This becomes even more important as companies implement autonomous or semi-autonomous software into internal workflows, customer service, and operational processes.
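The pattern described above, short-lived scoped credentials plus auditable authorization and revocation, can be sketched in a few lines. This is an illustrative model only: the class and method names (`AgentCredentialBroker`, `issue`, `authorize`, `revoke`) are hypothetical and do not represent Okta's API.

```python
import time
import secrets
from dataclasses import dataclass, field

@dataclass
class AgentCredential:
    """A scoped, expiring credential for one AI agent (illustrative)."""
    agent_id: str
    scopes: frozenset
    expires_at: float
    token: str = field(default_factory=lambda: secrets.token_urlsafe(16))

class AgentCredentialBroker:
    """Issues short-lived credentials and records every event for audit."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.audit_log = []   # append-only trail of grants, checks, revocations
        self.revoked = set()  # tokens pulled before their natural expiry

    def issue(self, agent_id, scopes):
        cred = AgentCredential(agent_id, frozenset(scopes),
                               expires_at=time.time() + self.ttl)
        self.audit_log.append(("issue", agent_id, tuple(sorted(scopes))))
        return cred

    def authorize(self, cred, scope):
        """Allow an action only if the credential is unrevoked, unexpired, and in scope."""
        ok = (cred.token not in self.revoked
              and time.time() < cred.expires_at
              and scope in cred.scopes)
        self.audit_log.append(("authorize", cred.agent_id, scope, ok))
        return ok

    def revoke(self, cred):
        self.revoked.add(cred.token)
        self.audit_log.append(("revoke", cred.agent_id))

broker = AgentCredentialBroker(ttl_seconds=300)
cred = broker.issue("support-agent-7", {"crm:read"})
assert broker.authorize(cred, "crm:read")        # in scope, not expired
assert not broker.authorize(cred, "crm:write")   # least privilege: denied
broker.revoke(cred)
assert not broker.authorize(cred, "crm:read")    # revoked mid-lifetime
```

The key design point is that every decision, grant or denial, lands in the audit log, which is what makes after-the-fact visibility into agent activity possible.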
Focus on security
The findings add to the broader debate about whether companies have sufficient basic security controls in place before deploying generative AI and agent-based systems. As organizations increase the number of non-human identities in their systems, challenges extend beyond authentication to governance, accountability, and lifecycle management.
For Australian organizations, the report suggests that despite the growing use of AI, many are still in the early stages of protecting these identities. The result is a mismatch between deployment size and monitoring maturity, especially when no single security or technology team has clear responsibility.
Okta argues that identity systems should become the primary layer of control for both human and non-human users. In this model, AI agents are treated as first-class identities, subject to the same visibility, access restrictions, and governance standards as other actors in the organization.
According to Okta, these controls help reduce excessive access, improve visibility into system activity, and manage the entire lifecycle of AI agents, including access revocation and real-time intervention.
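The first-class-identity model described above amounts to running humans and agents through one policy check rather than maintaining separate rule sets. A minimal sketch, with entirely hypothetical names and a deny-by-default policy:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Principal:
    """Any actor in the directory; agents get no special treatment."""
    id: str
    kind: str                 # "human" or "agent" — same rules either way
    entitlements: frozenset

# Illustrative policy table: resource -> entitlements required to access it.
POLICY = {"finance-db": {"read-reports"}}

def can_access(principal, resource):
    required = POLICY.get(resource)
    if required is None:
        return False          # unknown resources are denied by default
    return required <= principal.entitlements

alice = Principal("alice", "human", frozenset({"read-reports"}))
bot = Principal("expense-agent", "agent", frozenset())

assert can_access(alice, "finance-db")       # entitled human: allowed
assert not can_access(bot, "finance-db")     # unentitled agent: denied
```

Because the check never branches on `kind`, an agent that needs access must be granted the same entitlement a human would, which is what keeps governance uniform.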
