OpenAI faces backlash after Pentagon AI deal leads to resignations and user exodus

Washington, March 10 (SANA) OpenAI is facing growing criticism from employees, users and industry rivals after signing an agreement with the U.S. Department of Defense that allows its artificial intelligence technology to be used for military and security operations.

The controversy intensified over the weekend after Caitlin Kalinowski, the company’s senior head of robotics, resigned in protest of the deal. Critics say the agreement could enable the use of advanced AI systems in warfare and potentially domestic surveillance.

Kalinowski said her decision was motivated by broader concerns about the role of artificial intelligence in national security.

“I care so much about the robotics team and the work we have built together,” she wrote on social media platform X. “But surveilling the American public without judicial oversight and granting lethal autonomy to combat systems without human permission are issues worthy of much broader discussion.”

OpenAI secured the defense contract late last month, shortly after rival AI company Anthropic turned down a similar offer from the Pentagon. CEO Sam Altman announced the agreement on February 28, giving the U.S. military access to some of the company’s AI models.

Anthropic CEO Dario Amodei had previously rejected a comparable deal, citing concerns that the technology could be used for fully autonomous weapons or large-scale domestic surveillance.

Amodei said at the time: “I cannot in good conscience consent to their request.”

OpenAI defended the agreement, saying it includes safeguards and “clear lines” governing how its technology is used. A company spokesperson said the agreement provides “a practical path to responsibly use AI in national security,” including limiting domestic surveillance and banning fully autonomous weapons.

Still, criticism persists within the company. OpenAI researcher Aidan McLaughlin said on X that he did not think the agreement was justified.

Other employees praised Anthropic’s refusal to sign a similar contract. One OpenAI staffer told CNN that many of his colleagues “really respect” the rival company’s stance.

Users have pushed back as well. Critics on social media are calling for a boycott of ChatGPT and encouraging others to switch to Anthropic’s Claude chatbot.

App analytics data suggest the backlash had an immediate impact: on February 28, the day the agreement was announced, uninstalls of the ChatGPT app spiked by more than 295%.

By Monday, Anthropic’s Claude was the most downloaded free app on Apple’s U.S. App Store, beating out ChatGPT, Google’s Gemini and Grok, the chatbot developed by Elon Musk’s xAI, and remained at the top for several days.

In the face of mounting criticism, Altman tried to calm the backlash by answering user questions on X and acknowledging that the deal had been rushed.

“It was definitely rushed and it didn’t give a good impression to the public,” he said.

OpenAI later announced that it would revise its agreement with the U.S. government after critics called the deal “opportunistic and sloppy.”

Altman said the new language explicitly prohibits the use of the company’s systems to spy on Americans. Under the revised terms, intelligence agencies such as the National Security Agency will also require additional contractual approvals before using OpenAI technology.

The debate comes as artificial intelligence plays an increasingly important role in military operations. AI technology is already being used to manage logistics, analyze information, and process large amounts of battlefield data.

The United States, Ukraine and NATO rely on tools developed by U.S. data analytics firm Palantir, whose AI-powered defense platform Maven integrates satellite imagery, intelligence reports and other military data.

Louis Mosley, Palantir’s head of UK operations, said such systems help commanders make “faster, more efficient, and ultimately more lethal decisions when necessary.”

But experts warn that large-scale AI models can still produce errors and fabricated information, known as “hallucinations,” raising concerns about their reliability in high-stakes military situations.
