Most enterprise AI activity takes place without the knowledge of IT and security teams. According to Lanai, 89% of AI use within an organization is invisible to those teams, creating risks in data privacy, compliance, and governance.
This blind spot is growing as AI capabilities are built directly into business tools. Employees often connect personal AI accounts to work devices and use unauthorized services, making it difficult for security teams to monitor usage. Lanai says this lack of visibility has exposed businesses to data leaks and regulatory violations.
AI use cases hidden in plain sight
In healthcare, workers used AI tools to summarize patient data, raising HIPAA concerns. In the financial sector, teams preparing for IPOs unknowingly moved sensitive information into personal ChatGPT accounts. Insurance companies used built-in AI capabilities to segment customers by demographic data in ways that could violate anti-discrimination rules.
Lanai CEO Lexi Reese said one of the most surprising findings came from an internal tool that was already approved.
“One of the biggest surprises was the hidden innovation inside already-approved apps (SaaS and internal apps). For example, sales teams were discovered uploading zip code demographic data to Salesforce.
“On paper, Salesforce was an 'approved' platform. In reality, its embedded AI created regulatory risks that CISOs had never seen before.”
Lanai says these examples reflect a larger trend. AI is often built into tools such as Salesforce, Microsoft Office, and Google Workspace. Because these features live inside tools employees already use, the activity bypasses traditional controls such as data loss prevention and network monitoring.
How Lanai's Platform Works
To address this, Lanai launched an edge-based AI observability agent. The platform installs lightweight detection software directly on employee devices. Operating at the edge lets it detect AI activity in real time without routing data through a central server.
Reese explained that the design required solving complex engineering challenges.
“Running AI models at the edge flips the script. The easy path is to rely on a static list of apps, but that isn't updated dynamically at the speed at which employees adopt AI.
“That's what most AI security startups do, but those architectures date very quickly because they depend on static lists from top-down committees while employees keep creating new data exposure risks.
“We designed a prompt-detection model to run directly on laptops and in browsers without data ever leaving the device. The hard part was compressing detection into something lightweight enough not to hurt performance, yet rich enough to detect not just app names but prompt-level interactions.
“Once we know an interaction is AI, our SaaS risk and workflow intelligence model clusters prompt patterns instead of scanning for static keywords. That lets us preserve privacy, minimize latency, and scale across thousands of endpoints without killing performance.”
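Lanai has not published implementation details, but the approach Reese describes, scoring each prompt against clusters of known workflow patterns on the device itself rather than matching static keywords, can be illustrated with a minimal sketch. Everything below (the exemplar prompts, the bag-of-bigrams features, and the threshold) is an assumption for illustration, not Lanai's code.

```python
# Minimal sketch (not Lanai's implementation): classify a prompt by comparing it
# to exemplar workflow patterns on-device, instead of matching static keywords.
import math
import re
from collections import Counter

def features(text: str) -> Counter:
    """Bag of lowercase word bigrams -- a crude stand-in for a learned embedding."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return Counter(zip(words, words[1:]))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse feature vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical exemplar prompt patterns for risky workflow clusters.
EXEMPLARS = {
    "phi_summary": features("summarize this patient visit and lab results"),
    "demographic_segmentation": features("segment customers by zip code and age"),
}

def classify_prompt(prompt: str, threshold: float = 0.3) -> str | None:
    """Return the closest risky-workflow cluster, or None if nothing matches."""
    scores = {name: cosine(features(prompt), ex) for name, ex in EXEMPLARS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

print(classify_prompt("Please summarize the patient visit notes and lab results"))
# -> "phi_summary"  (classification happens locally; the raw prompt stays on the device)
```

In a design like the one Reese describes, only the classification result would need to leave the device, which is what keeps the raw prompt text private.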
Lanai says the software can be deployed within 24 hours using standard mobile device management systems. Once installed, it helps organizations understand their AI footprint and create policies to govern its use.
Governance Without Shutting AI Down
The company emphasizes that its goal is not to completely block AI. Instead, it focuses on giving CISOs and other leaders the information they need to make decisions. By seeing which tools are used, companies can assess them for risk and decide which ones to approve or limit.
For regulated industries such as healthcare, Reese said monitoring must go beyond the app level to distinguish safe from unsafe AI use.
“The trick is that an 'approved platform' does not mean an 'approved workflow.' We look at the prompt + data patterns, not just the app.
“For example, in a large hospital network, clinicians used AI summarization functionality embedded in a web-based EHR portal to auto-draft patient visit summaries. On the surface, this was within an authorized EHR platform, but the workflow fed PHI into an AI model that was not covered by the hospital's HIPAA business associate agreement.
“Lanai can detect the difference not by flagging 'EHR use' in general, but by recognizing the specific prompt + data patterns that send sensitive patient records into unsafe AI workflows.
“We detect signals such as which AI feature was invoked, what data types appear in the prompt, and whether the workflow matches a company- or regulator-defined sensitive use case. That allows us to separate compliant innovation from risky misuse in real time, within the same SaaS tools where most legacy monitoring fails.”
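As illustration only, the kind of signal combination Reese describes, which AI feature was invoked, which data types appear in the prompt, and whether that pairing matches a defined sensitive use case, might look like the sketch below. The regexes, feature names, and policy table are hypothetical assumptions, not Lanai's actual rules.

```python
# Hedged sketch of combining signals (AI feature + data types in the prompt)
# against a policy table of sensitive use cases. Illustrative only.
import re
from dataclasses import dataclass

DATA_TYPE_PATTERNS = {
    "phi_mrn": re.compile(r"\bMRN[-:\s]*\d{6,}\b", re.I),   # medical record number
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # US social security number
    "zip_code": re.compile(r"\b\d{5}(?:-\d{4})?\b"),        # ZIP code (demographics)
}

# Hypothetical policy: (ai_feature, data_type) pairs that require review.
SENSITIVE_COMBINATIONS = {
    ("ehr_summarize", "phi_mrn"),      # PHI sent to an AI feature outside the BAA
    ("crm_segmentation", "zip_code"),  # demographic segmentation risk
}

@dataclass
class Event:
    ai_feature: str  # which embedded AI capability was invoked
    prompt: str      # prompt text, inspected locally on the device

def flag(event: Event) -> list[str]:
    """Return the sensitive (feature, data type) combinations this event matches."""
    found = [name for name, pat in DATA_TYPE_PATTERNS.items() if pat.search(event.prompt)]
    return [f"{event.ai_feature}+{d}" for d in found
            if (event.ai_feature, d) in SENSITIVE_COMBINATIONS]

print(flag(Event("ehr_summarize", "Summarize visit for MRN 00123456, labs attached")))
# -> ['ehr_summarize+phi_mrn']
```

The design point is that the same data type can be fine in one workflow and risky in another; it is the combination of feature, data, and use case that triggers a flag, not the app name alone.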
Measuring the impact
Lanai says organizations using the platform are seeing significant reductions in AI-related incidents.
“In that healthcare system, a 'data exposure incident' is primarily when a clinician puts patient records, lab results, or other protected health information into AI features in an EHR or productivity app.
“Within 60 days of deploying Lanai, we saw a drop of up to 80%, not because our customers stopped using AI, but because they had the visibility to flag and redirect unsafe workflows at the endpoint,” Reese said.
A similar pattern has emerged in financial services, where organizations report reductions of up to 70% in unapproved AI usage for analyzing sensitive financial data within a single quarter. In some cases the drop came from shutting down unauthorized applications; in others, organizations kept the productivity benefits by moving the same AI use cases into a secure, approved environment within their sanctioned technology stack.