Altman says he opposes applying the Defense Production Act to AI companies

OpenAI CEO Sam Altman cautiously weighed in on the dispute between Anthropic and the Department of Defense on Friday.

“The government, the Department of Defense, needs AI models. They need AI partners,” Altman, who is pursuing his own agreement with the military, told CNBC on Friday. “This is clear and I think Anthropic and other companies are saying they understand that.”

Anthropic is working against a deadline to secure a contract with the Department of Defense for the use of its frontier model, Claude. But Anthropic Chief Executive Dario Amodei said on Thursday that he would not budge on what he called two “red lines.”

In a memo posted on Anthropic’s website, Amodei said he “could not in good conscience comply with their request” to use Claude for domestic mass surveillance or fully autonomous weapons.

“The contract language we received from the Department of War makes virtually no progress in preventing mass surveillance of American citizens or the use of Claude in fully autonomous weapons,” Anthropic said in a statement shared with Business Insider on Friday. “Despite DOW’s recent public statements, these narrow safeguards have been at the heart of our negotiations over several months.”

The Pentagon previously gave Anthropic an ultimatum: participate or be blacklisted from government contracts, a significant hit to its bottom line.

A senior Pentagon official told Business Insider on Thursday that Defense Secretary Pete Hegseth is willing to force Anthropic to cooperate with the Pentagon under the Defense Production Act, a 1950s-era law.

Altman said that while the move might be overkill in his view, it is essential for AI companies to collaborate with governments and the military.

“Personally, I don’t think the Department of Defense should threaten DPA against these companies,” he told CNBC. “But I also think it’s important that companies that choose to work with the Department of Defense do so as long as the Department abides by the legal protections and some of the red lines that have been established in this area.”

“Despite the differences with Anthropic, I pretty much trust them as a company and I think they really care about safety,” Altman added.

OpenAI, Anthropic, xAI, and others are all competing to become the government’s model of choice. Anthropic, OpenAI, and Google have all been cleared to handle government information, but so far only xAI’s Grok has been cleared by the Department of Defense to handle classified information.

The Wall Street Journal reported Thursday that Altman said in a memo to staff that OpenAI is pursuing its own agreement with the Department of Defense that “allows us to deploy our models in sensitive environments and is consistent with our principles.”

He added that the effort was aimed at “de-escalation,” an apparent reference to the heated exchange between Anthropic and the Pentagon.

OpenAI’s views on working with the military have evolved in recent years. In 2024, the company removed language from its usage policies that prohibited “activities that pose a high risk of physical harm,” including “weapons development” and “military and warfare,” clearing the way for it to pursue military contracts. OpenAI also appointed former National Security Agency director Paul Nakasone to its board of directors in 2024.
