Enterprise AI security requires increased visibility across users, agents, data, and connected systems.

Proofpoint outlined an AI security approach that integrates collaboration security, data protection, AI governance, and runtime controls as enterprises deploy AI tools and agents across their environments.
In a recent media briefing, Proofpoint APJ cybersecurity strategy director Jennifer Chen said the company is focused on the intersection of people, data, and AI as its operations expand beyond email and traditional collaboration channels.
“Humans are no longer working alone, but in collaboration with AI tools and autonomous agents,” Chen said. “We believe we are looking toward a future where humans, agentic AI, and systems all work together in what we define as agentic workspaces.”
Chen said Proofpoint’s business has grown since it was taken private by Thoma Bravo in 2021. The company currently serves nearly 3 million customers worldwide, including large corporations, government agencies, and public sector organizations. In APJ, its team has grown to more than 300 people, nearly three times its size in 2019, she said.
Proofpoint continues to process email data at scale and has visibility into trillions of emails, Chen said. Although the company is still best known for email security, email has become more than a communication channel, she said. It also serves as an identity artifact that attackers exploit to target organizations through phishing, business email compromise, and account takeover.
Proofpoint’s current focus is on collaboration tools, SMS messaging, phishing simulations, cloud accounts, insider threats, and data protection. The company is integrating several acquisitions into its platform to support data security and governance, including data loss prevention, Chen said.
The briefing also featured the company’s detection platform, Proofpoint Nexus. According to Chen, Nexus draws on data from across Proofpoint’s systems to power detection models that help organizations understand risk across their users, data, and AI activity.
Tim Choi, group vice president of product marketing at Proofpoint, said the introduction of enterprise AI raises three key security concerns: how users access AI tools, how organizations build and deploy AI agents, and how AI tools connect to enterprise systems and data.
Proofpoint’s research found that 68% of employees use AI tools that have not been approved by their employer, Choi said. These tools include both web-based services and software installed on endpoints, such as desktop AI applications and AI-enabled browsers.
“The first question many security professionals have is: What are these tools and what are users using to accomplish their jobs?” Choi said.
He said it’s important to have visibility into prompts, responses, and connections, because AI interactions can include attempts to extract information, circumvent guardrails, or return unsafe output. AI tools may also connect to messaging systems, middleware, repositories, or business data.
When asked what the first steps organizations should take, Choi said companies need to start with governance before introducing technical controls. “Organizations need to develop policy documents for the safe use of AI,” he said, adding that business and functional teams need to agree on how AI will be used before mapping controls to risk scenarios.
Choi said an AI agent’s behavior is not limited to a single prompt and response, which adds a new layer of risk. Agents can call language models, MCP servers, tools, and services across multiple steps.
“Every microstep can pose a risk, increasing the importance of understanding what’s going on inside that agent,” he said.
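To make that concrete, here is a minimal sketch of per-step agent instrumentation, in which every microstep is logged and checked before it runs. The loop, the guardrail check, and the audit log are all hypothetical stand-ins for whatever a given agent runtime provides, not a description of Proofpoint’s product.

```python
# Minimal sketch of per-step agent instrumentation (hypothetical helpers;
# not Proofpoint's implementation). Each "microstep" -- a model call, a tool
# call, an MCP request -- is logged and checked before the agent proceeds.
import json
import time

AUDIT_LOG = []

def guardrail_check(step_type: str, payload: str) -> bool:
    """Toy policy: block steps whose payload mentions credentials."""
    blocked_terms = ("password", "api_key", "secret")
    return not any(term in payload.lower() for term in blocked_terms)

def record_step(step_type: str, payload: str) -> None:
    AUDIT_LOG.append({
        "ts": time.time(),
        "type": step_type,     # e.g. "llm_call", "tool_call", "mcp_request"
        "payload": payload,
    })

def run_step(step_type: str, payload: str) -> None:
    record_step(step_type, payload)
    if not guardrail_check(step_type, payload):
        raise PermissionError(f"Guardrail blocked {step_type}: {payload!r}")
    # ... dispatch to the real model/tool/MCP server here ...

# An agent run is a sequence of such steps, each independently auditable:
run_step("llm_call", "Summarise our unresolved incidents")
run_step("tool_call", "ticketing.search(status='unresolved')")
print(json.dumps(AUDIT_LOG, indent=2))
```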
Proofpoint’s AI Security portfolio includes AI Security for Access, AI Security for Agents, and AI Security for MCP. According to Choi, AI Security for Access focuses on detecting AI tools, controlling usage, and monitoring prompts, responses, links, content, and payloads. AI Security for Agents provides visibility into agent behavior and enforces guardrails and runtime controls, while AI Security for MCP acts as a gateway between AI tools and enterprise systems.
Existing security tools are still part of a company’s AI security plan, Choi said. Proofpoint is in discussions with industry peers to integrate through MCP servers, which can link security tools and support rapid retrieval of information across connected systems.
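Conceptually, a gateway of the kind described for AI Security for MCP sits between AI clients and enterprise systems and applies policy to each tool call. The sketch below is an illustrative outline under assumed names (ALLOWED_TOOLS, forward_to_server); it is not Proofpoint’s implementation.

```python
# Illustrative sketch of an MCP-style gateway: every tool call from an AI
# client is checked against an allow-list before being forwarded to the
# enterprise system. All names here are hypothetical.
ALLOWED_TOOLS = {
    "sharepoint.read": {"max_results": 50},
    "crm.lookup": {"max_results": 10},
}

def forward_to_server(tool: str, args: dict) -> dict:
    # Placeholder for the real call to the downstream MCP server.
    return {"tool": tool, "args": args, "status": "forwarded"}

def gateway_call(tool: str, args: dict) -> dict:
    policy = ALLOWED_TOOLS.get(tool)
    if policy is None:
        return {"status": "denied", "reason": f"tool {tool!r} not allowed"}
    # Clamp arguments to policy limits before forwarding.
    if "max_results" in args:
        args["max_results"] = min(args["max_results"], policy["max_results"])
    return forward_to_server(tool, args)

print(gateway_call("sharepoint.read", {"query": "Q3 report", "max_results": 500}))
print(gateway_call("payroll.export", {}))  # denied: not on the allow-list
```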
Concerns about data leaks remain
Richard Combes, head of data security sales engineering for EMEA and APJ at Proofpoint, said data security is becoming more difficult as data volumes grow and AI tools access more enterprise content.
“We expect data volumes to grow 300% over the next five years,” Combes said. “More data will be processed by more systems at machine speed.”
According to Combes, the key risks include data loss across multiple channels, excessive internal file access, insider abuse, and GenAI applications exposing sensitive data at scale. Shadow AI is a concern because employees may use unauthorized tools outside of company-approved contracts and controls.
Asked about first steps, Combes said organizations need to map their AI data early. This includes identifying which AI tools are being used, what data is being accessed, where that data comes from, who owns it, and what outputs and logs are being created. These measures need to be done in tandem with governance policies, access controls, guardrails and regular risk checks, he said.
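As a rough illustration of that mapping exercise, the inventory Combes describes could be kept as simple structured records; the schema below mirrors his list of questions and is an assumption for illustration, not a prescribed format.

```python
# Rough sketch of an AI data inventory record, mirroring the questions
# Combes raises: which tool, what data, where it comes from, who owns it,
# and what outputs and logs it creates. Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class AIDataRecord:
    tool: str                    # AI tool in use, e.g. "ChatGPT"
    data_accessed: str           # description of the data touched
    source_system: str           # where the data originates
    owner: str                   # accountable business owner
    outputs_and_logs: list[str] = field(default_factory=list)

inventory = [
    AIDataRecord(
        tool="ChatGPT",
        data_accessed="HR policy documents",
        source_system="SharePoint",
        owner="HR operations",
        outputs_and_logs=["chat transcripts", "exported summaries"],
    ),
]

for record in inventory:
    print(record)
```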
He cited the example of New South Wales, where a contractor working on a flood recovery program reportedly entered a spreadsheet of around 3,000 names into ChatGPT to help format and extract the data. Combes said the files contained contact information and, in some cases, personal health information.
Combes demonstrated Proofpoint’s AI Data Governance module, which shows approved and shadow AI applications, risky prompts, uploaded files, connected repositories, and users who pose a high risk. The system can also identify AI tools connected to platforms such as SharePoint and revoke those connections, he said.
He also explained how the platform handles sensitive data shared with AI tools. In one example, an authorized AI tool was allowed to process a code snippet, but a plaintext password in the code was redacted before the content was sent.
“The goal is not to block all uses of AI, but to prevent sensitive data from entering AI,” Combes said.
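The redaction step Combes demonstrated can be pictured as a pattern filter applied to outbound content before it reaches the AI tool. The patterns below are toy examples; real DLP engines use far richer detectors (classifiers, validators, and context rules).

```python
# Toy sketch of redacting secrets from content before it is sent to an
# authorized AI tool. The two patterns here are illustrative only and are
# not Proofpoint's detection logic.
import re

SECRET_PATTERNS = [
    (re.compile(r'password\s*=\s*["\'][^"\']+["\']', re.IGNORECASE),
     'password="[REDACTED]"'),
    (re.compile(r'AKIA[0-9A-Z]{16}'), "[REDACTED_AWS_KEY]"),
]

def redact(text: str) -> str:
    """Replace anything matching a secret pattern before forwarding."""
    for pattern, replacement in SECRET_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

snippet = 'db_connect(host="db1", password="hunter2")'
print(redact(snippet))  # db_connect(host="db1", password="[REDACTED]")
```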
When asked where organizations still lack visibility, Chen said that while many organizations are looking closely at agents and AI, the broader question is how AI will affect existing gaps. “What AI actually does is accelerate the threat,” she said. “That widens the gap, increases the prevalence of threats, and increases volume.”
According to Chen, organizations need to assess whether their existing tools address current risks, while also looking at behaviors, intentions, and interactions across humans, agents, AI systems, and communication channels.
