Hegseth gives Anthropic CEO deadline over unrestricted military use of its AI

WASHINGTON — U.S. Defense Secretary Pete Hegseth gave Anthropic’s CEO a Friday deadline to release the company’s artificial intelligence technology for unrestricted military use or risk losing government contracts, according to people familiar with Tuesday’s talks.

Anthropic developed the chatbot Claude and is the last of the Pentagon’s contracted AI companies that has not provided its technology to the military’s new internal network. CEO Dario Amodei has repeatedly raised ethical concerns about unchecked government use of AI, including the risks of fully autonomous armed drones and AI-assisted mass surveillance that could track dissent.

Beyond terminating the contract, Pentagon officials warned that Anthropic could be designated a supply-chain risk, or that the department could invoke the Defense Production Act to effectively give the military the power to use the product even if the company does not approve of its use, said one of the people, who was not authorized to comment publicly on the talks and spoke on condition of anonymity.

The person said the atmosphere at the meeting was friendly, but that Mr. Amodei did not budge on the two areas he has set as red lines: fully autonomous military targeting operations and domestic surveillance of American citizens.

The standoff highlights the debate over AI’s role in national security and concerns about how it might be used in high-stakes situations involving lethal force, classified information, or government surveillance. It also comes as Hegseth has vowed to eradicate what he calls “woke culture” within the military.

“Powerful AI monitoring billions of conversations by millions of people could gauge public sentiment, detect when disloyal groups are forming, and root them out before they spread,” Amodei wrote in an essay last month.

The Pentagon did not immediately comment on the development, which was first reported by Axios. Defense officials had earlier confirmed the meeting between Hegseth and Amodei.

Anthropic is the only AI company approved for classified military networks

Last summer, the Department of Defense announced it would award defense contracts to four AI companies: Anthropic, Google, OpenAI, and Elon Musk’s xAI. Each contract is worth up to $200 million.

Anthropic is the first AI company approved for classified military networks and works with partners such as Palantir. The remaining three companies are currently operating only in unclassified environments.

By early this year, Hegseth had publicly embraced only two of them: xAI and Google. In a January speech at Musk’s SpaceX rocket facility in South Texas, he said he was disregarding any AI models that would “not lead to war.”

Hegseth said his vision for military AI systems means they operate “without ideological constraints that limit legitimate military use,” before adding that the Pentagon’s “AI will never be woke.”

The Secretary of Defense said Musk’s artificial intelligence chatbot Grok will join the Pentagon’s network GenAI.mil. The announcement comes days after Grok, which is part of Musk’s social media network X, came under global scrutiny for producing highly sexualized deepfake images of people without their consent.

OpenAI announced in early February that it would also join the military’s secure AI platform, allowing military personnel to use a custom version of ChatGPT for unclassified tasks.

Anthropic bills itself as more safety-oriented

Anthropic has long pitched itself as the most responsible and safety-focused of the big AI companies, ever since its founders left OpenAI to start the company in 2021.

Owen Daniels, associate director of analysis and a fellow at Georgetown University’s Center for Security and Emerging Technology, said the standoff with the Pentagon is testing those intentions.

“Anthropic’s peers, including Meta, Google, and xAI, are happy to comply with the department’s policy of using the model for all lawful applications,” Daniels said. “The company therefore has limited bargaining power here and risks losing influence in driving the sector’s AI adoption.”

Amid the AI boom that followed the release of ChatGPT, Anthropic worked closely with former President Joe Biden’s Democratic administration, voluntarily subjecting its AI systems to third-party monitoring to guard against national security risks.

CEO Amodei warns of AI’s potentially devastating dangers while rejecting the “doomer” label. In a January essay, he argued that “we are much closer to real danger in 2026 than we were in 2023,” but that those risks should be managed in a “realistic and pragmatic way.”

Anthropic is at odds with the Trump administration

This isn’t the first time Anthropic’s push for stronger AI safeguards has put it at odds with President Donald Trump’s administration. Anthropic publicly criticized President Trump’s proposal to ease export restrictions to allow some AI computer chips to be sold in China. Even so, the AI company remains a close partner of chip maker Nvidia.

President Trump’s Republican administration and Anthropic are also on opposing sides in lobbying for AI regulation in US states.

In October, President Trump’s top AI adviser, David Sacks, accused Anthropic of engaging in a “sophisticated regulatory capture strategy based on fear-mongering.”

Sacks was responding on X to Anthropic co-founder Jack Clark, who had written about his attempt to balance technological optimism with “appropriate fear” over the steady march toward more capable AI systems.

Anthropic hired a number of former Biden officials shortly after Trump returned to the White House, in part to signal a bipartisan approach. The company recently added Chris Liddell, a former White House official during President Trump’s first term, to its board of directors.

Amos Toh, a senior adviser with the Liberty and National Security Program at New York University’s Brennan Center for Justice, said the Pentagon’s “astounding” deployment of AI shows the need for increased congressional oversight and regulation of AI, especially when it is used to monitor American citizens.

“The law has not kept up with the speed at which technology is evolving,” Toh wrote in a post on Bluesky. “But that doesn’t mean the Department of Defense has a blank check.”

___

O’Brien reported from Providence, Rhode Island.

David Klepper, Matt O’Brien and Konstantin Tropin, Associated Press


