AI giant Anthropic recently refused to sign a contract with the Department of Defense that would have given the US military “unrestricted access” to its technology for “all lawful purposes.” In turning down the agreement, Anthropic CEO Dario Amodei insisted on two clear exceptions: a ban on mass surveillance of Americans and a ban on fully autonomous weapons operating without human supervision.
The next day, the United States and Israel launched a major attack on Iran.
This leaves many people wondering: How would a war fought with fully autonomous weapons be different? How significant was Amodei’s ethical decision not to cross what he called the “red lines” of fully autonomous weapons and AI-enabled mass surveillance? And what do these red lines mean for other countries?
This decision came at a significant cost to Anthropic. US President Donald Trump has ordered all US government agencies to stop using Claude, Anthropic’s family of advanced large language models (LLMs) and its conversational chatbot. US Secretary of Defense Pete Hegseth has designated Anthropic a “supply chain risk,” which could affect the company’s prospects for other contracts. And rival OpenAI quickly struck a deal with the Department of Defense instead.
Risks of fully autonomous weapons
AI chatbots are typically not weapons on their own, but they can become part of weapon systems. A chatbot won’t fire missiles or control drones by itself, but it can be connected to larger military systems.
It can quickly summarize intelligence, generate a shortlist of targets, rank high-priority threats and recommend attacks. The key risk is a pipeline that runs from sensor data to AI interpretation, target selection and weapon activation with minimal or no human control, or even awareness.
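To make that pipeline concrete, here is a deliberately simplified sketch in Python. The track data, threshold and function names are invented for illustration and do not describe any real system; the point is that a single human-review flag is all that separates an AI recommendation from an autonomous engagement decision:

from dataclasses import dataclass

@dataclass
class Track:
    sensor_id: str
    description: str
    threat_score: float  # score assigned by an upstream AI model

def rank_threats(tracks):
    # Sort AI-scored tracks so the highest-scoring ones come first.
    return sorted(tracks, key=lambda t: t.threat_score, reverse=True)

def recommend_action(track, human_review=True):
    # Only recommend engagement; never authorize it without a human sign-off.
    if track.threat_score < 0.8:
        return f"monitor {track.sensor_id}"
    if human_review:
        return f"flag {track.sensor_id} for human approval"
    # Dropping the human_review check is what "fully autonomous" means here.
    return f"engage {track.sensor_id}"

tracks = [Track("radar-07", "fast-moving aircraft", 0.93),
          Track("cam-12", "vehicle convoy", 0.41)]
for t in rank_threats(tracks):
    print(recommend_action(t))

In a fully autonomous system, that human-review check is simply absent.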
Fully autonomous weapons are military platforms that, once activated, carry out military operations independently without human intervention. They utilize sensors such as cameras, radar, and AI algorithms to analyze the environment and search, select, and attack targets.
Some advanced helicopters, for example, can already operate without human intervention. Fully autonomous weapons go further: they remove human control and oversight, leaving AI to make the final attack and battlefield decisions.
This is alarming given recent research showing that advanced AI models chose to use nuclear weapons in 95% of cases in simulated war games.

Risks of mass surveillance
Frontier AI models can quickly summarize huge data sets and automatically surface patterns that look like signals of suspicious people or activity, even on the basis of weak associations. In a statement regarding Anthropic’s negotiations with the Department of Defense, Amodei argued that “AI-enabled mass surveillance poses serious new risks to our fundamental freedoms.”
Records, communications and metadata can be analyzed to scan entire populations. These systems can build profiles and watch lists that automatically flag people to be questioned, denied entry or denied work. They pose risks to privacy because they can fuse data from multiple sources, such as social media accounts, and combine it with cameras and facial recognition to track people in real time.
AI models also make mistakes. Even a small rate of false associations becomes dangerous when the system is run against millions of people.
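A rough calculation in Python makes the scale of the problem concrete. The population size, error rates and prevalence below are hypothetical assumptions chosen for illustration, not figures from any real surveillance program:

# Hypothetical numbers chosen for illustration only.
population = 30_000_000        # people scanned by the system
prevalence = 0.0001            # fraction genuinely of interest (1 in 10,000)
false_positive_rate = 0.001    # innocent people wrongly flagged (0.1%)
true_positive_rate = 0.9       # genuine cases correctly flagged (90%)

true_hits = population * prevalence * true_positive_rate
false_hits = population * (1 - prevalence) * false_positive_rate

print(f"Genuine cases flagged:   {true_hits:,.0f}")
print(f"Innocent people flagged: {false_hits:,.0f}")
print(f"Chance a flag is correct: {true_hits / (true_hits + false_hits):.1%}")

Under these assumptions, roughly 30,000 innocent people would be flagged alongside fewer than 3,000 genuine cases, meaning more than nine in ten flags would point at the wrong person.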
AI models are also opaque. It is often impossible to fully understand how a model analyzes data and reaches its conclusions, which makes its output even harder to challenge.
“All lawful purposes”
The phrase “all lawful purposes” sounds like a safeguard. But this language means the government can use the AI for any purpose it deems lawful, with few contractual restrictions.
This matters because legality is a moving target: laws change, are often ill-equipped to deal with rapidly evolving technology in real time, and are subject to shifting interpretation.
This is why Anthropic, a company founded by former OpenAI employees with an explicit focus on AI safety and ethics, has argued that “all lawful purposes” cannot provide a stable guardrail against the new risk of AI-powered mass surveillance.
Anthropic famously developed an in-house lab to understand how Claude behaves, interprets queries, and makes autonomous decisions. Such efforts are important given the opacity of LLMs and the speed at which their capabilities are developing.
Project Maven, but with bigger stakes?
In some ways, this story is familiar. Technology companies have long been at the forefront of innovation, and while there is promise for progress, there is also the risk of abuse and negative impact. The closest historical comparison is Google’s Project Maven from 2018.
Google had a contract with the Department of Defense to help analyze drone surveillance footage. About 4,000 Google employees protested the project, arguing that surveillance work should not be part of the company’s mission. Google announced it would not renew the Maven contract, then released AI principles that included pledges on weapons and surveillance.
This situation became a landmark event due to the force of employee activism and public pressure.
However, the Project Maven example is also a reminder that business ethics and AI safety are moving targets. In early 2025, Google quietly walked back its pledge not to use AI in weapons or surveillance in order to win lucrative new defense contracts.
Anthropic’s current situation is similar in some ways to that of Google’s Project Maven. It shows companies and their leaders trying to put limits on military uses of AI. This illustrates the tension that arises when espoused corporate values collide with government and national security demands.
Anthropic’s case is also unique because generative AI in 2026 is far more powerful than it was just a few years ago. Project Maven was intended only to analyze drone footage. Because current models can be used for so many tasks, the risk of spillover is even greater.
LLMs like Claude can also improve themselves by learning from users’ modifications and refining their behavior through iterative feedback loops. What an unrestricted Claude and its customer, the Department of Defense, might then be able to do is alarming.
Who sets the limits?
These events are about more than whether Anthropic stands by its own principles or whether the Department of Defense gets its demands met. They raise an important question that keeps resurfacing as AI becomes more powerful: who sets the limits on how AI is used when national security is involved?
If “all lawful purposes” becomes the default, guardrails will depend on politics and legal interpretation. That makes safeguards important for Canada and other countries: ethics cannot be left to contract negotiations or corporate conscience.
These events illustrate how complex AI ethics is in practice. Ethical principles and declarations are important and plentiful. But in reality, the ethics of AI are set through contracts, procurement rules, and the actual actions and oversight of various stakeholders.
Canada’s defense and public sectors are building AI capabilities, and Canada is working closely with U.S. defense and intelligence agencies. This means that procurement language and standards may change. If “all lawful purposes” becomes standard language in the U.S. national security market, there could be pressure on Canada and other countries to adopt similar terminology.
The encouraging news is that Canada is putting in place governance tools that can be strengthened and expanded. The Directive on Automated Decision-Making aims to ensure transparency, accountability and fairness in government systems, and requires impact assessments and public reporting.
The Algorithmic Impact Assessment is a mandatory risk-assessment tool tied to the directive.
However, Canadians should check whether procurement standards list prohibited uses, and should watch for ongoing audits and independent oversight, so that safeguards do not depend solely on one government or one leading company.
