Use of risky shadow AI remains widespread

Applications of AI



Dive Brief:

  • The haphazard adoption of AI by enterprises poses a significant security risk, according to a report published Tuesday by security company Netskope.
  • Many employees continue to use AI tools through personal accounts that lack proper security guardrails and are outside the purview of an organization's IT team, creating opportunities for hackers to manipulate those tools and infiltrate corporate networks.
  • “This combination of new AI-driven threats and traditional security concerns will define the evolving threat landscape into 2026,” Netskope said in the report.

Dive Insight:

Shadow AI is a known issue, but it remains a persistent challenge for organizations racing to incorporate AI into their workflows.

The Netskope report, based on cloud security analysis from October 2024 to October 2025, found that nearly half (47%) of people using generative AI platforms do so through personal accounts that are not managed by their company. Unmanaged use of AI creates gaps in a company's security defenses that hackers can exploit.

“A significant percentage of employees rely on tools like ChatGPT, Google Gemini, and Copilot, using credentials that are not tied to their organization,” Netskope said.

The data paints a mixed picture of individual AI usage trends. The share of people using personal AI apps (47%) has dropped significantly from 78% a year ago, while the share using company-approved accounts rose from 25% to 62%. However, the share of people switching between personal and business accounts ticked up from 4% to 9% year over year. The findings show that companies “still have work to do to provide the level of convenience and functionality that users desire,” Netskope said.

Using personal AI in a corporate environment creates multiple risks, including incomplete regulatory compliance and insecure API connections between external AI services and internal servers. Data breaches remain one of the most common consequences of unsanctioned AI use, and Netskope said the number of incidents in which users submit sensitive data to AI apps has doubled year over year, with the average company experiencing 223 such incidents per month.

Security experts say the best way for organizations to police the use of shadow AI and prevent such incidents is to prioritize implementing AI governance processes.

The “transition to managed [AI] accounts is encouraging,” Netskope said, “while also highlighting how quickly employee behavior can outpace governance.” The company recommended that organizations adopt “clearer policies, better provisioning, and continuous visibility into how AI tools are actually used across the workforce.”
