
IT and security leaders are ramping up their use of GenAI with a focus on protecting corporate assets from the risks posed by employees accessing AI applications using personal security credentials and devices.
These are the key findings from Microsoft’s latest Data Security Index, which gathered responses from more than 1,700 people, including 300 in the U.S. who work for companies with 500 or more employees across a variety of industries.
One of the energy-industry IT directors interviewed (who declined to be named) described the main benefits GenAI offers in combating evolving security threats and unsanctioned “shadow AI” use by employees: “Our GenAI system constantly monitors, learns from, and makes remediation recommendations far more than is possible with any kind of manual or semi-manual process.”
The dual benefits of vast amounts of data for analysis and scalability beyond human labor are recurring themes throughout the report. Below I highlight what I consider the most important takeaways. Let’s start with how employees are using GenAI tools (IT and security professionals will probably tell you they’re misusing them) and the risks that usage creates.
A drive to innovate and improve productivity
Those at the cutting edge of AI may assume it has matured to the point where employee use is highly structured and broadly controlled by corporate management, which would suggest that misuse of these tools is decreasing. But Microsoft’s data shows that “quite the opposite is happening.”
The percentage of security leaders reporting that employees access GenAI for work using personal credentials rather than corporate IDs rose from 53% in 2024 to 58% in 2025. Likewise, the share of companies saying employees use personal devices to access GenAI for work increased from 48% to 57% over the same period.
Additionally, 32% of survey respondents said their data security incidents have involved GenAI tools, and 35% expect GenAI-related incidents to increase in the next year.
As a result of these activities, 47% of companies surveyed say they have GenAI-specific controls in place, up from 39% in 2024.
It’s useful to see where and how security leaders are responding to these threats and which GenAI-related controls are top of mind. The controls they are putting in place focus on protecting data, improving employee skills and knowledge, and monitoring for misuse as GenAI continues to spread.
A quote from a CISO exemplifies the idea of increasing control rather than simply restricting access: “We are working to not only block unsanctioned GenAI tools, but also increase the number of sanctioned ones and direct people to them.” The table below details which controls are prioritized.
| GenAI-related controls | Percentage of respondents who prioritize |
| --- | --- |
| Prevent sensitive data from being uploaded to GenAI tools | 42% |
| Train employees on the safe use of GenAI | 38% |
| Detect anomalous user activity and risky users | 37% |
| Identify sensitive data uploaded to or generated by GenAI | 37% |
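The top-ranked control, blocking sensitive data from reaching GenAI tools, is in practice a data-loss-prevention (DLP) check that runs before content leaves the organization. As a rough illustration only (the report does not describe any implementation, and the pattern names and thresholds below are my own assumptions), a minimal pre-upload scan might look like this:

```python
import re

# Hypothetical patterns for a minimal pre-upload check. A production DLP
# control would rely on a classification service and trained detectors,
# not a handful of hand-rolled regexes.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                # U.S. SSN format
    "card_number": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),   # 16-digit card
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),          # generic secret
}

def scan_for_sensitive_data(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def allow_upload(text: str) -> bool:
    """Permit the upload only if no sensitive pattern matches."""
    return not scan_for_sensitive_data(text)
```

A gateway or browser extension sitting between employees and a GenAI tool could call `allow_upload` on each prompt and block or redact flagged content before it leaves the corporate boundary.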
AI-powered protection
In response to the proliferation of employees using GenAI without the necessary controls, business and technology leaders are ramping up their own use of AI and agents, and are sharing insights on how AI can strengthen security and governance.
At a high level, 82% of those surveyed plan to use GenAI for data security operations, up from 64% of respondents in 2024 (a 28% relative increase). And while 39% currently use agents for data security, a larger share, 58%, say they have tried or considered agents for that purpose, signaling further adoption ahead.
The specific agentic AI data security use cases they cite are similarly enlightening, and align closely with the efforts, outlined above, to prevent incidents and breaches stemming from employee AI usage.
| Examples of using agents for data security | Percentage of respondents |
| --- | --- |
| Detect critical risks | 40% |
| Automatically protect, block, flag, and classify data | 36% |
| Investigate potential data security incidents | 35% |
| Make recommendations to make data more secure | 35% |
| Reduce false positive alerts | 35% |
Following these data points, Microsoft also made “moving forward” recommendations that include using GenAI agents to accelerate responses and reduce noise, because agents “provide scalable automation for data discovery, protection, and remediation.”
One last thing to note: this study and Microsoft’s analysis argue for a unified security platform that reduces tool sprawl and the fragmented visibility of security data that comes with it. Sixty-four percent of respondents expect improved threat detection and response from platform consolidation, and 56% expect improved visibility into data risk across workloads.
My colleague Kieron Allen will provide more context on the findings later this week. Below is a series of related analyses with additional insights.
