Businesses across the UK suffer from a “significant blind spot” in what employees share with AI applications, according to new research.
More than two-thirds (67%) of organizations fail to account for the information their staff are sharing with AI platforms and large language models (LLMs), according to SailPoint research.
To make matters worse, the study found that 35% of respondents admitted to sharing data through external tools rather than approved internal applications, creating a range of risks for businesses.
The rise of “shadow AI” has been a recurring problem for organizations over the past two years. Research shows that when employees use unauthorized applications, they risk exposing sensitive company data, and this trend shows no signs of slowing down.
The concern is echoed in Gartner findings: a November 2025 study from the analyst firm predicts that by 2030, 40% of businesses will suffer a data breach caused by shadow AI.
SailPoint noted that the shadow AI trend is growing, even though many companies are making significant investments in data management and AI capabilities for their employees.
More than four in five respondents (82%) say they are investing in additional staffing and skills training to help employees better manage AI applications, and 41% are hiring dedicated AI and analytics personnel.
Notably, nearly half (45%) of IT leaders say they still don’t know how and where information is being shared.
Agentic AI poses new challenges for governance
Mark McClain, CEO and founder of SailPoint, said the findings show that AI is often a double-edged sword for organizations: while these tools help staff, they also create a new dimension of risk for security teams.
“AI tools can improve productivity, but they also create serious risks when operated beyond an organization’s visibility and governance,” he said.
“When sensitive information is entered into unauthorized models, it can be leaked, mishandled, or amplified by errors and hallucinations.”
McClain warned that the rise of agentic AI could further amplify poor data management practices, putting companies at greater risk.
SailPoint noted that, given this heightened risk, increased visibility and monitoring are now a priority for many companies. In previous SailPoint research, four in five (80%) organizations revealed that their AI agents had performed “unintended actions” such as accessing or sharing inappropriate data.
According to the company, UK firms are adding up to 10,000 AI agents and machine identities every month, meaning security teams can quickly become overwhelmed.
“As the use of AI systems becomes more prevalent, things will only get further out of control if organizations fail to put appropriate guardrails in place. Add to this the fact that other tools fly under the radar, and things get even worse,” McClain commented.
“Organizations need to stop the workarounds and take back control. That requires a combination of skills and awareness, but at its core this is an identity challenge.”
