Artificial intelligence (AI) is already deeply embedded in the way modern militaries sense, analyze, decide, and operate. AI is becoming the foundation of defense effectiveness, from mission planning and intelligence analysis through predictive logistics and maintenance to autonomous platforms and battlefield decision support systems.
This transformation is not happening in isolation. Many of today’s defense innovations rely on dual-use technologies, systems originally developed for the civilian market and later adapted for military use. Commercial cloud computing, advanced chips, computer vision models, and autonomous capabilities are now core components of modern defense architectures.
Over the past few months, I’ve been speaking with a number of startups and technology companies developing defense and dual-use capabilities. All of them are chasing the growing defense market, and nearly all stake their claim on deep, modern, and effective AI systems.
Are AI-driven defense systems resilient enough?
The speed of this convergence is unprecedented. Are AI-driven defense and dual-use systems secure enough for the role they are now required to play?
The answer matters enormously for national resilience, because AI dramatically expands operational capabilities while reshaping the cyber risk landscape in ways that traditional defense models were never designed to address. This tension is becoming increasingly difficult to ignore.
According to the World Economic Forum’s (WEF) Global Cybersecurity Outlook 2026, AI is now a major force reshaping cyber risks around the world. An overwhelming 94% of cyber leaders surveyed said AI will be the biggest driver of change in cybersecurity risks this year.
Among respondents, 87% cited AI-related vulnerabilities as the fastest-growing cyber risk in 2025. On the practical challenges of AI cybersecurity, 54% of organizations report insufficient skills to implement AI in cybersecurity, and 41% point to the need for human oversight of AI operations.
These numbers represent a world in which technological capabilities are accelerating faster than the structures that secure and regulate them.
This gap has direct operational implications. Modern defense systems no longer function as closed, siloed, air-gapped platforms. They operate as complex digital ecosystems built on software updates, remote connectivity, distributed sensors, and shared data pipelines. AI models sit at the core, and sometimes at the edges, of this architecture: they correlate sensor inputs, recommend actions, prioritize threats, and support real-time decision-making. Eventually, AI may be able to act on these decisions.
This is especially true in dual-use systems. The same computer vision algorithms that enable self-driving cars on public roads are now being built into unmanned aircraft systems. Commercial satellite imagery platforms feed into military intelligence workflows. Cloud-based analytics engines support command and control environments.
These technologies offer tremendous benefits. They reduce costs, shorten development cycles, and allow defense forces to benefit from the pace of commercial innovation. But they can also introduce commercial vulnerabilities directly into military systems.
AI expands the attack surface in unprecedented ways. Models can be manipulated through poisoned training data or adversarial inputs. Automated systems can be fooled at machine speed. Shared software libraries and open-source components create hidden dependencies that attackers can exploit.
In fact, dual-use AI systems blur the line between civilian and military cyber domains. AI-related vulnerabilities discovered in commercial environments can have consequences far beyond them. This reality is forcing a redefinition of what defense security actually means.
Redefining the meaning of defense security
Historically, defense superiority has been measured in platforms, munitions, and physical reach. It now also depends on the integrity of algorithms, data pipelines, and decision-making systems. Compromised sensor feeds or manipulated AI models can degrade situational awareness just as much as physical sabotage.
At the same time, AI is becoming deeply integrated into command and control environments. Today, decision support systems synthesize streams of information that cannot be handled by human teams alone. Autonomy allows one operator to manage multiple platforms simultaneously. While these features are essential for modern multidomain operations, they also introduce new failure modes.
If an AI system misinterprets context, amplifies noise, or obscures uncertainty, human operators may not immediately recognize the error. In a high-tempo environment, trust in automation can become a dependency.
It is important to emphasize that this is not an argument against AI. This is an argument against deploying AI without proper governance, resilience, cybersecurity, and validation.
WEF data highlights that many organizations remain unprepared. The combination of a high likelihood of incidents, poor governance, inexperience in responding to AI cyberattacks, and a lack of skills suggests structural vulnerabilities. In a rapidly escalating defense environment, this vulnerability could have strategic implications.
The key challenge lies in governance. Traditional defense certification models assume a deterministic system whose behavior can be thoroughly tested. AI systems don’t work that way. They can learn from data, adapt to patterns, and react unexpectedly to unfamiliar situations.
This makes verification, explainability, governance, and accountability difficult. Military commanders must be able to understand why a system issued its recommendations. Engineers must be able to verify performance under adversarial conditions. Policymakers need to be confident that automated systems comply with doctrinal, legal, ethical, and operational constraints. Without these guardrails, AI risks becoming a powerful but highly dangerous layer inserted between decision makers and reality.
The cybersecurity aspect makes this challenge even more acute. Attackers are already using AI to automate reconnaissance, create adaptive malware, and scale up their penetration attempts. Defenders increasingly rely on AI to detect anomalies and respond at machine speed. This accelerates feedback loops, making both attacks and defenses more automated, more complex, and less transparent.
For defense systems built on dual-use foundations, that loop becomes especially dangerous. Commercial AI platforms are not designed with the hotly contested battlefield in mind. They are optimized for performance, scale, and efficiency. Closing this gap requires intentional action.
AI security is a strategic defense issue
AI security needs to be treated not just as a technical issue, but as a strategic defense issue. Cyber risks associated with AI should be elevated to the national security framework alongside traditional threats and integrated into defense planning. Procurement decisions must consider not only functionality and cost, but also resilience to cyberattacks.
Defense organizations need to invest in AI-savvy cyber expertise. The WEF finding that 54% of organizations perceive a skills shortage should serve as a warning. Securing AI systems requires experts who understand AI, cybersecurity, and adversarial threat models.
Explainability and continuous verification must be non-negotiable. AI systems deployed in defense environments must be transparent, testable, and continuously monitored for drift and tampering. Trust must be designed in and verified; it cannot be assumed.
Finally, cooperation between government, industry and allies is essential. AI-centric cyber risks do not respect institutional or state boundaries. Threat intelligence, defensive techniques, and best practices must flow across sectors that increasingly share the same technology infrastructure.
Israel has a unique position in this landscape. The country’s defense sector is deeply connected to a world-class technology ecosystem, and its experience operating under persistent cyber threats provides important perspective. Leadership in AI-driven defense depends not only on the speed of innovation but also on the ability to protect what is built.
The strategic competition unfolding today is not just about the companies that develop the most advanced AI models. It’s about who can reliably deploy them, protect them under pressure, and maintain confidence in their operations when the going gets tough.
The lessons from the World Economic Forum’s Cybersecurity Outlook are clear. The future of defense will also be defined by cyber and AI security. In the AI era, strategic advantage will accrue to those who can ensure that the systems that guide decision-making are reliable, resilient, and secure when it matters most.
Esti Peshin is a global cybersecurity, AI, and aviation executive and former Vice President and General Manager of the Cyber Division of Israel Aerospace Industries. She is a general aviation and ULM/LSA licensed pilot and flight instructor and a member of Forum Dvora.
