US military will reportedly use Elon Musk’s Grok AI for sensitive systems

The US Department of Defense has reportedly reached an agreement to use Elon Musk’s Grok in classified systems, according to Axios. This follows news that the Department of Defense is currently in a dispute with another AI company, Anthropic, over limits on using its technology for things like mass surveillance.

Last year, the White House ordered Grok approved for government use along with ChatGPT, Gemini, and Anthropic’s Claude. But until now, only Anthropic’s models were allowed for the military’s most sensitive missions, such as intelligence, weapons development, and battlefield operations. Claude was reportedly used by the U.S. military in a raid on Venezuela that forced Venezuelan President Nicolas Maduro and his wife to flee.

But the Pentagon required Anthropic to make Claude available for “any lawful purpose,” including mass surveillance and the development of fully autonomous weapons. Anthropic reportedly refused to provide its technology for those uses, even though its models have a “safety stack” built into them.

xAI, by contrast, agreed to standards that would allow the Department of Defense to use its AI for any purpose the department deems “lawful.” But officials don’t believe xAI’s model is as cutting-edge or reliable as Anthropic’s Claude, and they acknowledge that replacing Claude with Grok will be difficult. The Department of Defense is also reportedly in contract negotiations with OpenAI and Google, whose models are considered comparable to Anthropic’s.

xAI announced a version of Grok for U.S. government agencies in July 2025. But shortly before that, the chatbot began spewing fascist propaganda and anti-Semitic rhetoric, calling itself “Mecha-Hitler.” All of this followed a public spat between Musk and Trump over the president’s spending bill, after which the GSA’s approval of Grok appeared to stall. Earlier this week, Anthropic accused three Chinese AI research institutes of exploiting Claude in a “distillation attack” to improve their own models.
