OpenAI prohibited military use of its models. The Pentagon tested them through Microsoft anyway


OpenAI CEO Sam Altman remains in the spotlight this week after his company signed a contract with the U.S. military. After Anthropic’s roughly $200 million contract with the Department of Defense collapsed, OpenAI employees criticized the move and called on Altman to release more information about the deal. Altman acknowledged in a social media post that it looked “sloppy.”

While the incident made headlines, it may be only the latest and most public example of OpenAI maintaining vague policies about how the U.S. military accesses its AI.

In 2023, OpenAI’s usage policy explicitly prohibited military use of its AI models. But some OpenAI employees discovered that the Pentagon had already begun experimenting with Azure OpenAI, a version of OpenAI’s models provided by Microsoft, two of the people said. At the time, Microsoft had a decades-long contract with the Department of Defense. The company was also OpenAI’s largest investor and held broad licenses to commercialize the startup’s technology.

That same year, OpenAI employees saw a Pentagon official walking through the company’s San Francisco office, the people said. They spoke on condition of anonymity because they are not authorized to discuss private company matters.

Some OpenAI employees were wary of engaging with the Department of Defense, while others were simply confused about what OpenAI’s usage policy meant. Did this policy apply to Microsoft? Sources told WIRED that it was not clear to most employees at the time, but OpenAI and Microsoft spokespeople said Azure OpenAI products are not and were not subject to OpenAI’s policy.

“Microsoft has a product called the Azure OpenAI service, which became available to the U.S. government in 2023 and is subject to Microsoft’s Terms of Service,” spokesperson Frank Shaw said in a statement to WIRED. Microsoft declined to say exactly when it made Azure OpenAI available to the Department of Defense, but noted that the service was not approved for “top secret” government workloads until 2025.

“AI already plays an important role in national security, and we believe it is important to have a seat at the table to ensure it is deployed safely and responsibly,” OpenAI spokeswoman Liz Bourgeois said in a statement. “As we approach this effort, we have been transparent with our employees, providing regular updates and dedicated channels for our teams to ask questions and engage directly with our national security teams.”

The Department of Defense did not respond to WIRED’s request for comment.

By January 2024, OpenAI had updated its policy, lifting the blanket ban on military use. Several OpenAI employees learned of the change through an article in The Intercept, sources said. Company leaders then addressed the update at an all-hands meeting, explaining how the company would approach the area carefully going forward.

In December 2024, OpenAI announced a partnership with Anduril to develop and deploy AI systems for “national security missions.” One of the people said that prior to the announcement, OpenAI told employees the scope of the partnership was narrow and would cover only unclassified workloads. That stood in contrast to an agreement between Anthropic and Palantir under which Anthropic’s AI would be used for sensitive military operations.

An OpenAI spokesperson confirmed to WIRED that Palantir approached OpenAI in the fall of 2024 about joining the FedStart program. OpenAI ultimately declined, telling employees the risk was too high, two people familiar with the matter told WIRED. The company is, however, currently working with Palantir in other ways.

Around the time the Anduril deal was announced, dozens of OpenAI employees joined a public Slack channel to discuss concerns about the company’s military partnerships, sources said and a spokesperson confirmed. Some believed the company’s models were too unreliable to be trusted with users’ credit card information, let alone to assist Americans on the battlefield.


