Retired general warns: America cannot fight an AI arms race with technology it doesn’t control



The United States is entering a new phase of strategic competition, in which artificial intelligence is no longer a novel capability but a defining element of military power. In the ongoing AI arms race, speed matters. Competence matters. But above all, control matters. That is why the recent conflict between Anthropic and the Department of Defense should concern anyone focused on America's national security.

At the heart of the controversy is a simple but profound disagreement: Who decides how advanced AI systems are used in the military? Anthropic, the developer of Claude and its ultra-powerful model Mythos, sought to limit how its technology could be deployed by drawing red lines around certain applications. The Department of Defense asserted that it must retain the ability to use AI tools for any lawful purpose to protect the country. When those positions proved incompatible, the relationship collapsed.

Anthropic was eventually designated a supply chain risk, forcing the military to look elsewhere for AI capabilities. Since then, details have emerged about Mythos, the model said to be "too dangerous" for public release, raising new and alarming concerns. Mythos reportedly has the ability to autonomously identify and weaponize undiscovered cybersecurity vulnerabilities, a capability that, without proper guardrails, could be exploited by cybercriminals. Because the tool is potentially so powerful, Anthropic has restricted access to it.

This episode should serve as a wake-up call, because it shows that the current structure of the U.S. AI ecosystem, a black box driven by closed systems that lack transparency, is fundamentally misaligned with national defense requirements.

Currently, the Department of Defense purchases access to AI capabilities but does not control them. The training, testing, and continued development of these models is firmly in the hands of private companies with their own governance frameworks, risk tolerances, and commercial incentives. That reality creates dangerous dynamics. It gives a small number of unaccountable private companies a de facto veto power over how the United States uses one of the most important technologies of our time. That is not a sustainable model for a constitutional republic. Nor is it a viable basis for military control.

Systems constrained by external approval processes, shifts in corporate policy, or the risk of sudden disruption cannot move at the pace modern warfare demands. And in a strategic competition defined by iteration cycles measured in weeks rather than years, these constraints do more than slow the United States down. They create gaps.

Meanwhile, China and its alliance partners are actively working to deploy AI capabilities at scale, leveraging open-source models that can be adapted to a wide range of military and intelligence applications. Systems like DeepSeek are not constrained by the corporate governance structures that bind American companies.

They are designed to be modified, expanded, and integrated across a broad ecosystem that includes not only the Chinese military but also a growing network of partner countries adversarial to the United States.

This creates an asymmetric threat. While the United States debates the terms of AI access through contracts with private vendors, its competitors are building flexible, state-aligned systems that can be quickly customized to operational needs. If this gap persists, the United States risks a significant military disadvantage.

The solution is not to abandon the private sector, which remains a source of extraordinary innovation and technological leadership. Nor is it to abandon the ethical considerations that must remain at the core of how the United States approaches the use of force. But it does mean recognizing that the current model, in which the government rents access to closed, proprietary systems it cannot fully control, is insufficient for the demands of strategic competition.

The U.S. government needs to start investing in a different approach: developing high-performing, secure, and adaptable open-source AI models that the U.S. government and its closest allies can control, audit, and deploy without external constraints. None of this eliminates the need for careful guardrails. The role of AI in warfare, from autonomy and targeting to surveillance and escalation, demands important and legitimate debate. But those debates should be led by elected officials and military leaders accountable to the American people, not settled by private companies' acceptable use policies.

This strategic realignment can take several forms: government-led model development, partnerships with trusted research institutions, or the creation of open-weight models designed specifically for defense applications. It could also include collaborative frameworks that ensure interoperability while maintaining national control, and new procurement strategies that prioritize transparency and adaptability over convenience.

However, regardless of the path chosen, success depends on getting the mechanics right.

The United States has long understood that we cannot outsource our security infrastructure. We build our own ships. We design our own weapons. We maintain command of the systems that support military superiority. Artificial intelligence is no exception.

Building effective public-private partnerships that serve our national defense requires more than technical ability: it requires trust, integrity, and sound processes. That means establishing clear guardrails, aligning incentives, and ensuring that both government and industry share responsibility for the risks and consequences of deploying these systems. Done right, such a framework can harness private sector innovation while retaining government control over how these capabilities are ultimately used.

The Anthropic episode risks being a preview rather than an anomaly. Unless we act now to ensure that the United States and its allies have AI systems they can truly control, it may prove to be a warning we failed to heed.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
