Netskope, the leader in Secure Access Service Edge (SASE), released new research showing that regulated data – data that organizations are legally obligated to protect – accounts for more than one-third of sensitive data shared with generative AI (genAI) applications, posing a potential risk of costly data breaches for businesses.
New research from Netskope Threat Labs reveals that three-quarters of the companies surveyed now block at least one genAI app entirely, reflecting enterprise technology leaders' desire to limit the risk of sensitive data exfiltration. However, fewer than half of organizations have data-centric controls in place to prevent sensitive information from being shared in input queries, and most are lagging in adopting the advanced data loss prevention (DLP) solutions required to enable genAI safely.
Using a global dataset, researchers found that 96% of companies now use genAI, a figure that has tripled in the past 12 months. The average company uses nearly 10 genAI apps, up from three last year, and the top 1% of adopters now use an average of 80 apps, up from 14. Alongside this growth, companies are seeing a surge in proprietary source code being shared with genAI apps, which accounts for 46% of recorded data policy violations. These trends complicate risk management and demand more robust DLP efforts.
The more nuanced security and data loss controls that organizations are applying are a positive sign of proactive risk management. For example, 65% of companies have now implemented real-time user coaching to guide users' interactions with genAI apps. The research shows that effective coaching plays a key role in mitigating data risk: 57% of users changed their behavior after receiving a coaching alert.
“Securing genAI requires further investment and increased attention. genAI use is pervasive in the enterprise with no signs of slowing down anytime soon,” said James Robinson, chief information security officer at Netskope. “Enterprises need to be aware that genAI output can accidentally expose sensitive information, spread misinformation, or introduce malicious content. A robust risk management approach is needed to protect data, reputation, and business continuity.”
Netskope's Cloud and Threat Report on AI apps in the enterprise also found that:
- ChatGPT is the most popular app, used by over 80% of businesses.
- Microsoft Copilot saw the most dramatic usage growth, up 57% since its launch in January 2024.
- 19% of organizations ban GitHub Copilot entirely.
Key takeaways for businesses
Netskope encourages companies to review, adapt, and customize their risk frameworks specifically for AI and genAI, using initiatives such as the NIST AI Risk Management Framework. Specific tactical steps to address risk from genAI include:
- Know the current status: First, assess your existing use of AI and machine learning, data pipelines, and genAI applications. Identify vulnerabilities and gaps in your security controls.
- Implement the core controls: Establish basic security measures such as access control, authentication mechanisms, and encryption.
- Advanced control plans: Go beyond the basics and create a roadmap for advanced security controls. Consider threat modeling, anomaly detection, continuous monitoring, and behavioral detection to identify data movement from your cloud environment to genAI apps that deviates from normal user patterns; a minimal sketch of this kind of detection follows this list.
- Measure, start, fix, repeat: Regularly evaluate the effectiveness of your security measures, and adapt and improve them based on practical experience and emerging issues.
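To make the behavioral detection recommendation above concrete, here is a minimal sketch of one common approach: flagging uploads to genAI apps whose size deviates sharply from a user's historical baseline. This is an illustrative example, not Netskope's implementation; the event fields, the `GENAI_DOMAINS` set, the `is_genai_destination` helper, and the z-score threshold are all assumptions made for the sketch.

```python
import statistics
from dataclasses import dataclass


@dataclass
class UploadEvent:
    user: str
    destination: str  # e.g. "chat.openai.com"
    bytes_sent: int


# Hypothetical set of domains treated as genAI apps for this sketch.
GENAI_DOMAINS = {"chat.openai.com", "copilot.microsoft.com", "gemini.google.com"}


def is_genai_destination(domain: str) -> bool:
    return domain in GENAI_DOMAINS


def flag_anomalous_uploads(history: dict[str, list[int]],
                           events: list[UploadEvent],
                           z_threshold: float = 3.0) -> list[UploadEvent]:
    """Flag uploads to genAI apps whose size deviates sharply from the
    user's historical baseline, using a simple z-score test.
    The threshold of 3.0 is an illustrative assumption, not a standard."""
    flagged = []
    for event in events:
        if not is_genai_destination(event.destination):
            continue
        baseline = history.get(event.user, [])
        if len(baseline) < 2:
            continue  # not enough history to establish normal behavior
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline)
        if stdev > 0 and (event.bytes_sent - mean) / stdev > z_threshold:
            flagged.append(event)
    return flagged


# Example: a 250 KB upload stands out against a baseline of ~1 KB transfers.
history = {"alice": [1_200, 900, 1_500, 1_100]}
events = [UploadEvent("alice", "chat.openai.com", 250_000)]
print(flag_anomalous_uploads(history, events))  # alice's upload is flagged
```

A production system would of course learn baselines continuously and correlate many more signals, but even this simple per-user statistical check captures the core idea of detecting data movement that deviates from normal patterns.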