While OpenAI locks down Washington, Anthropic wins over users and rises to the top of the App Store.
Anthropic has been frozen out in Washington following a public dispute with the Pentagon over how its AI models could be deployed, with President Donald Trump ordering federal agencies to phase out the company's technology.
Meanwhile, OpenAI is breaking new ground: CEO Sam Altman announced in a post on X Friday night that the company has reached an agreement with the Department of Defense to deploy its AI models on sensitive networks.
The agreement has some loyal ChatGPT users worried about OpenAI's ambitions, sparking online debate about its ethical implications, with some saying they are defecting to Anthropic's rival chatbot, Claude.
As of Saturday at 6:38 p.m. ET, Claude ranked No. 1 among the most downloaded productivity apps on Apple's App Store.
Converts have been documenting their switch in screenshots shared on social media.
Pop musician Katy Perry posted a screenshot of Claude's pricing page with a red heart drawn around the $20/month "Pro" plan and a caption that read, "It's over."
“I’ve switched,” wrote another X user, Adam Little, along with a screenshot of his email inbox with a receipt from Anthropic and a cancellation confirmation from OpenAI.
On Reddit’s ChatGPT subreddit, dozens of users said they had deleted their accounts and urged others to do the same.
While the phrase "cancel ChatGPT" has become commonplace online, some users have taken a more personal tone, saying Altman's actions "crossed the line."
The agreement has not swayed all AI users, however.
In the Reddit thread, several commenters said the news would not affect the selection of AI models, arguing that Anthropic’s work with Palantir raises similar concerns. In November 2024, Anthropic, Palantir, and Amazon Web Services signed an agreement to provide access to Claude models to U.S. intelligence and defense agencies.
After Secretary of Defense Pete Hegseth said he would designate Anthropic as a “supply chain risk to national security,” Anthropic said it would “challenge the supply chain risk designation in court.”
Altman said in a post Friday that the Department of Defense agreed to two safety principles for OpenAI.
“Two of our most important security principles are the prohibition of domestic mass surveillance and human responsibility for the use of force, including autonomous weapons systems,” Altman wrote on X. “The DoW agrees with these principles, which are reflected in law and policy, and we have incorporated them into the agreement.”
By Saturday afternoon, OpenAI had released a more detailed description of its contract with the Department of Defense, including the specific language it used regarding surveillance and autonomous weapons.
On the topic of autonomous weapons, OpenAI said:
AI systems will not be used to independently direct autonomous weapons where human control is required by law, regulation, or department policy. Nor will they be used to make other high-stakes decisions that require the approval of a human decision-maker under the same authorities.
On the topic of mass surveillance, OpenAI said:
AI systems shall not be used under these authorities to conduct unrestricted surveillance of the personal information of U.S. persons.
Some chatbot users suggested that all is fair in business, war, and federal procurement, while others argued that the Department of Defense's stance may have handed Anthropic a public relations victory.
X user Tae Kim joked that Hegseth might need a new title: "Claude Marketing Director."
