From Iran to Venezuela, the U.S. military is deploying an “AI-first” approach to war. However, a recent dispute between the Pentagon and artificial intelligence company Anthropic has raised questions about whether deployment of this technology by the military is effective, safe, and legal.
What is the conflict between the Pentagon and Anthropic?
Anthropic has asked the military to commit to not using its AI model, Claude, in weapons that can identify and fire on targets without human input, commonly referred to as “fully autonomous weapons.” The company also sought to prohibit the military from using Claude to spy on Americans, particularly by analyzing location records, financial information, and other large data sets purchased on the commercial market.
The request came after the military reportedly used Claude in its January operation against Venezuela that captured Nicolas Maduro. Claude is a foundation model, meaning it is trained on large datasets to perform general tasks such as text synthesis, image manipulation, and audio generation.
The Department of Defense refused to agree to these restrictions, designated Anthropic a “supply chain risk,” and blacklisted it from defense contracts. The company has challenged the move as a violation of due process and the First Amendment.
How has the U.S. military used AI in Iran?
The Pentagon is reportedly using AI to generate hundreds of recommendations for targets inside Iran, pinpointing their location, prioritizing their importance, and even assessing whether the targets are legitimate.
One of the AI systems used by the military, the Maven Smart System, is the culmination of a decade of collaboration between the Department of Defense and the technology industry to enhance intelligence analysis, surveillance, and targeting. The system allows the military to comb through vast amounts of information from satellites, data brokers, the military’s own drones and sensors, and social media to single out people and objects of interest.
The system also integrates Anthropic’s Claude, which the military uses to speed up this target analysis as well as generate other types of information and simulate battlefield scenarios.
How much is the military spending on AI?
As explained in the Brennan Center report, the Department of Defense has allocated at least $75 billion to AI-driven programs since 2016. This figure does not include sensitive programs or programs where the scope of AI use is unclear, so the actual total may be higher.
In addition to surveillance and targeting, the military is investing heavily in autonomous weapons that can select targets and take lethal action with varying degrees of human involvement. The war in Ukraine is being transformed by rapid advances in small-drone warfare, increasing the pressure on the U.S. military to keep pace. The Department of Defense has requested $13.4 billion for these types of systems in 2026 alone.
The military’s spending also includes $9 billion for data centers and computing capabilities tailored to security needs. This is the infrastructure that keeps the military’s AI and technology systems online.
The amount will almost certainly increase as the Department of Defense continues its AI-first approach.
Which companies have been contracted to develop military AI?
Much of the Pentagon’s AI funding has so far gone to data analytics giant Palantir and AI-powered drone maker Anduril.
Palantir and Anduril recorded their largest annual defense revenues ever in 2025: $903 million and $912 million, respectively. Palantir is the lead contractor for the Maven Smart System used in Iran, Iraq, Syria, Ukraine, and Yemen.
Anduril specializes in autonomous systems such as drones, surveillance towers, and technology to defeat autonomous weapons deployed by adversaries. Powered by its proprietary AI, the company’s drones can navigate hostile environments, communicate with each other, and even attack targets with little human involvement.
Last July, the Department of Defense awarded contracts to Anthropic and three other companies (OpenAI, xAI, and Google) to develop military applications for their foundation models.
What are the biggest concerns about the military’s use of AI?
The rush to deploy AI threatens to sideline human expertise and judgment in life-or-death decisions, putting military personnel and civilians alike at risk. Anyone who has used an AI chatbot, for example, knows that these systems frequently make mistakes, some obvious and some hard to detect.
AI is also prone to errors in military settings. In 2024, Maven’s algorithm could accurately identify tanks about 60% of the time in clear skies, but its accuracy dropped to just 30% when it snowed. Foundation models, meanwhile, can present false or misleading analysis in a persuasive way, increasing the likelihood that commanders and analysts will accept their recommendations, especially under wartime pressure.
Even when humans make the final decisions, then, relying on AI to select or justify targets can produce erroneous results, and in military situations those mistakes can be fatal. A media investigation, for example, found that the Israel Defense Forces failed to adequately substantiate AI-recommended targets for strikes in Gaza, in part because its analysts were under intense pressure to approve targets quickly.
AI also allows the military to collect and stitch together location data, social media posts, and other information to reconstruct people’s movements, connections, and habits at scale. This form of mass surveillance threatens privacy and civil liberties, and generating sensitive insights about Americans undermines Fourth and First Amendment rights. The inferences such technology draws can also be misleading and biased, as when it flags satire or humor as a genuine security threat or associates protected characteristics, such as Black or Muslim identity, with violence or other negative traits.
What are the risks of military reliance on commercial AI technology?
Handing ownership of technological capabilities to high-tech companies limits the Pentagon’s visibility into, and control over, the inner workings of the software that powers its most sensitive systems. The military routinely turns to industry for resources and expertise it lacks in-house, but it risks becoming too dependent on civilian-owned and controlled technology for its AI needs.
This opacity makes it difficult for the military to examine its own targeting algorithms for hidden biases that could lead to misidentifying civilians as military targets. In 2025, for example, the military warned that battlefield communications systems designed by Palantir and Anduril were “black boxes,” leaving it unable to determine whether unauthorized users could access applications and data. Although the military appears to have mitigated that particular issue, questions remain about whether other systems have similar vulnerabilities.
Are there safeguards for the military to use AI responsibly?
Yes, but there are too few of them, and those that exist are insufficient.
Congress has the power to regulate the military, but it has done little to address the use of AI. The White House sought to fill the gap with a 2024 national security memorandum that outlines guardrails for the use of AI in national security, including testing to identify and minimize privacy risks. But the memorandum also gives agencies broad discretion to waive these safeguards, including when they “create an unacceptable impediment to the agency’s essential operations.”
The Department of Defense has its own directive on autonomous weapons, including those with AI-enabled capabilities. The directive does not prohibit weapons that can identify and fire on targets without human intervention. Instead, it requires senior Pentagon leaders to review whether such weapons allow for “an appropriate level of human judgment regarding the use of force” before approving their use. That criterion can be satisfied as long as there is some broader human input into decisions about where and how such weapons are used.
The directive also mandates testing, training, and other procedures to minimize operational failures and harm to civilians. But the Pentagon’s cuts to oversight have raised questions about its ability to comply. The Pentagon has, for example, halved the staff of the Office of the Director of Operational Test and Evaluation, which oversees much of this testing, and suspended most of its civilian protection activities.
Finally, the military purchases large commercial datasets containing personal and sensitive information about Americans without judicial oversight. Internal rules issued by the Office of the Director of National Intelligence do not meaningfully limit this practice.
How can Congress better regulate the military’s use of AI?
Congress should ensure that the Pentagon not only explains how it is using AI but also discloses how much it is spending on the technology and the known risks and failures of the systems it acquires. Lawmakers should mandate testing and evaluation of AI that poses risks to the safety of military personnel, the privacy and civil liberties of Americans, and the lives of civilians. These requirements must apply both before and during the technology’s use.
Congress must also limit the use of autonomous weapons in accordance with the laws of war, including prohibitions on inherently indiscriminate weapons, and enact stronger privacy protections. A good start would be passing the Fourth Amendment Is Not For Sale Act, which would prohibit government agencies from purchasing certain types of sensitive data about Americans without legal process.
These safeguards require strong enforcement and monitoring. Congress should reverse the cuts to, and increase the budget of, the Office of the Director of Operational Test and Evaluation. It should also take a hard look at how outsourcing AI capabilities to a handful of high-tech companies could affect national security.
