Greg Bayes-Brown, who helped shape AI policy in an earlier career in biotechnology research, says that even the people who write the rules can’t help breaking them. Although he understood the technology and its risks, his job led him to use NotebookLM through an unsanctioned personal Google account to organize large amounts of information that would otherwise have required extensive back-and-forth between customer service and other departments. He estimates the shortcut turned 150 hours of work into 30 minutes.
Bayes-Brown said the pressure on employees to use AI effectively has only grown as IT departments spend months debating how to regulate AI tools. He felt the risk of company information leaking from a personal NotebookLM account (Google says it does not use data entered into NotebookLM to train its models) was lower than the risk of falling behind.
“The risk of data being exposed is negligible, really small,” he says. “But the possibility of losing to our Chinese peers is a huge risk.”
Shadow AI is shorthand for circumventing your company’s IT policies to prompt your favorite chatbot or have an agent organize your inbox. The data suggests the practice is spiraling, and most of us are probably guilty of it. A Microsoft survey of UK workers found that 71% had used an unapproved consumer AI tool at work, and half said they used one weekly. According to the AI security platform Reco, midsize companies typically run about 200 unauthorized AI tools per 1,000 employees. And Microsoft’s 2024 report found that nearly 80% of employees who use AI bring their own tools to work.
Leslie Nielsen, chief information security officer at Mimecast, said of the rise of shadow AI: “A thousand cuts kills you, and people don’t understand that.” When someone uploads a document containing financial data to an AI tool, a chatbot or agent with the right prompts can regurgitate that data, and its analysis of it, to someone outside the company. Three years ago, after software engineers pasted internal code into ChatGPT, Samsung banned employees from using generative AI tools on company devices. Amazon grew alarmed when ChatGPT’s responses to certain prompts began to closely resemble internal data. Those incidents occurred before OpenAI shipped an enterprise version of the chatbot that does not use inputs to train its models. Since then, specialized apps with built-in AI capabilities have flooded the market, creating a plethora of security threats for IT departments to monitor.
Companies want to stay secure while still driving growth, but for employees, the race to be a top worker can mean breaking the rules.
Shadow IT, the use of technological shortcuts or software not authorized by a company, is not new. In 2014, the Department of Health and Human Services reached a $4.8 million settlement with Columbia University and NewYork-Presbyterian Hospital after a physician who developed applications for both the hospital and the university attempted to deactivate a personally owned computer server on the network that held patient health information. Thousands of patient records ended up accessible via Google.
Today, quietly feeding company information to AI is especially tempting, and it can even happen by accident. People are developing emotional attachments to AI tools; no other workplace technology has been so personal. Office innovation typically happens from the top down: executives choose the tools the company uses, deciding whether it’s Gmail or Microsoft 365, Slack or Discord. The AI hype cycle has reversed that dynamic. Big tech companies are releasing generative tools and putting the onus on white-collar workers to figure out which tasks can be automated.
“The problem with shadow AI is even worse than shadow IT,” said Nicole Jiang, co-founder of Fable Security. “Enterprises are actually allowing and driving AI adoption at a pace we’ve never seen before,” which leaves IT professionals “trying to think about how we can best protect it, how can we allow users to explore it instead of blocking it.”
IT professionals tend to understand the nuances of AI adoption: where it poses risks to the company and where it’s likely to benefit the bottom line. In a survey of 1,000 IT leaders conducted by the software company Freshworks, nearly 80% said they believe employees who use unapproved AI tools are more productive. But 86% said they had witnessed at least one negative incident involving unauthorized AI use in the past year, from compliance violations to security breaches.
Reaching for an AI tool your company hasn’t sanctioned over one it has isn’t simply an evolution of the iPhone-versus-Android debate. The technology is fundamentally changing how people work, encouraging nontechnical workers to experiment in ways never seen before. “Six months ago, the conversation was, ‘We’re going to use Claude because we think the output is better,’” says Harley Sugarman, CEO and founder of the security company Anagram. Now, even as many companies endorse enterprise AI tools, employees are seeking out other apps more tailored to their roles, from human resources to marketing to coding.
Workers’ AI use remains a blind spot for many companies. A February survey of 345 business leaders by the consulting firm Protiviti found that about half do not know the extent to which their employees are using AI, and only 4 in 10 companies had formal AI governance policies in place. Yet businesses are ramping up spending on enterprise AI, with 90% of IT leaders at large enterprises saying their workplaces plan to increase their AI tool budgets this year. According to a study by the agentic AI platform Writer and the research firm Workplace Intelligence, half of white-collar workers already use agents. Microsoft took Agent 365 out of preview and made it generally available this month, saying the move would let businesses “control agent sprawl” and “monitor, manage, and secure agents and their interactions.”
“It’s probably going to get worse before it gets better,” Sugarman says of the shadow-AI dilemma. “You can imagine the next evolution of this problem is that these agents start improving themselves, making more decisions, building more software, without even input from the end user. At that point, you’re in the realm of science fiction.” But he doesn’t think cybersecurity is doomed. IT professionals need visibility into how people are using AI and how agents behave, and less technical white-collar workers need training on how to use AI and why security protocols matter. “Right now, no one can really solve it.” As long as that remains true, employees are unlikely to stick to a small set of approved tools. Millions of employees report using shadow AI, yet few incidents have become public. But it only takes one for an IT nightmare to become reality.
