OpenAI and Google staff sign petition to limit Department of Defense use of AI

Applications of AI


top line

Hundreds of current Google and OpenAI employees have signed an open letter expressing support for Anthropic’s refusal to comply with the Pentagon’s demands for unrestricted access to its AI tools, and calling on their own companies’ leaders to hold the same red line and reject those demands.

important facts

As of Friday morning, the petition, titled “We Will Not Be Divided,” had been signed by 266 current Google employees and 65 current OpenAI employees.

The letter, citing an Axios report, accuses the Pentagon of targeting Anthropic over “a red line that does not allow its models to be used for domestic mass surveillance or to autonomously kill people without human oversight.”

The letter also notes that the Department of Defense is currently negotiating with Google and OpenAI “to have them agree to what Anthropic has rejected.”

The signatories accused the Pentagon of trying to “divide companies out of fear that the other will give in,” and said, “This letter helps create common understanding and unity in the face of this pressure.”

The petition also calls on OpenAI and Google leaders to “put aside their differences and come together to continue rejecting the Department of Defense’s demands.”

tangent

On Thursday, the New York Times reported that more than 100 Google staffers working on AI had signed an internal letter to company executives raising concerns about the Pentagon’s plans for the company’s AI tools. The letter, sent to Jeff Dean, chief scientist of Google’s AI arm DeepMind, urged the company to hold to the same red lines as Anthropic. “Please do everything in your power to prevent transactions that cross these fundamental lines…We love working at Google, and we want to take pride in our work,” the letter reportedly said.

What does Anthropic say about the conflict with the Pentagon?

In a statement released Thursday, Anthropic CEO Dario Amodei spelled out the red lines he would not cross, saying the company “cannot in good conscience comply” with the Pentagon’s request to lift safeguards and allow “any lawful use” of its AI tools. The statement outlined Anthropic’s work deploying its models to the U.S. military and intelligence community, but said “the cases are limited and we believe that AI has the potential to undermine, rather than protect, democratic values.” Accordingly, Anthropic’s Pentagon contract includes two safeguards barring the use of its AI for “domestic mass surveillance” and in “fully autonomous weapons” that can be deployed without human interaction.

important quotes

“Mass surveillance by AI poses serious new risks to our fundamental freedoms. To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with AI’s rapidly growing capabilities,” Amodei said in the statement. He added, “Currently, frontier AI systems are not reliable enough to power fully autonomous weapons. We will not knowingly deliver products that endanger U.S. warfighters or civilians.”


