Application security models are rapidly changing as AI becomes part of daily development workflows, but it's not something teams are always prepared for. James Wickett, CEO of DryRun Security, explains why “AI everywhere” is forcing organizations to rethink what application security should look like as developers ship faster than ever.
Wickett describes the gaps in the original “shift left” movement. Despite years of effort, many security tools still aren't felt by developers to be specific or useful. Too often, the industry has tried to incorporate traditional approaches (pattern matching and noisy detection results) into modern pipelines, overwhelming development teams and preventing security teams from prioritizing work that may not correspond to real-world exploitability.
The conversation then turns to the differences in AI applications. Wickett argues that the moment you introduce LLM into production, your risk model changes. This means you have introduced a probabilistic system that accesses new data, performs actions, and behaves in ways that deterministic tools are not designed to evaluate. This mismatch actually manifests itself as a combination of high utilization and low reliability. Developers may rely on AI assistants for speed, but are concerned about instability and security setbacks.
Wickett will also share what the team is currently looking for: clearer definitions of AI risks, reference architectures, and best practice controls to cover issues like instant deployment and over-agency. The goal is not to slow down development. It's about evolving security with AI so teams can continue to move quickly and not act blindly.
