US military signs agreements with seven tech companies to use AI in classified systems :: WRAL.com

WASHINGTON (AP) — The Pentagon announced Friday that it has reached agreements with seven high-tech companies to use artificial intelligence in classified computer networks, allowing the military to use AI-powered capabilities to support war efforts.

Google, Microsoft, Amazon Web Services, Nvidia, OpenAI, Reflection and SpaceX will provide resources to help “enhance warfighter decision-making in complex operational environments,” the Pentagon said.

Notably, AI company Anthropic is not on the list after engaging in a public debate and legal battle with the Trump administration over the ethics and safety of using AI in warfare.

The Department of Defense has rapidly accelerated its use of AI in recent years. A March report from the Brennan Center for Justice said the technology could help militaries reduce the time it takes to identify and attack targets on the battlefield, while also helping to organize weapons maintenance and supply lines.

But AI has already raised concerns that its use could violate Americans’ privacy or allow machines to select targets on the battlefield. One company with a Pentagon contract said its agreement requires human oversight in certain situations.

Concerns about military uses of AI intensified during Israel’s wars against militants in Gaza and Lebanon, where US technology companies quietly gave Israel AI-powered target-tracking capabilities. The number of civilians killed also soared, raising concerns that these tools contributed to the deaths of innocent people.

Questions about military applications of AI remain unanswered

Helen Toner, interim executive director of Georgetown University’s Center for Security and Emerging Technology, said the latest Pentagon agreements come at a time of concern about overreliance on technology on the battlefield.

“Much of modern warfare is based on people sitting in command centers behind monitors making complex decisions about chaotic, rapidly changing situations,” said Toner, a former OpenAI board member. “AI systems can help in terms of summarizing information and examining surveillance feeds to identify potential targets.”

But questions about human involvement, risk and the appropriate level of training remain unanswered, she said.

“How can we deploy these tools quickly to be effective and provide a strategic advantage, while also recognizing that we need to train our operators, make sure they know how to use them, and make sure we don’t put too much trust in them?” Toner asked.

Such concerns were raised by Anthropic. The company said it sought contractual guarantees that the military would not use its technology for fully autonomous weapons or for surveillance of American citizens. Defense Secretary Pete Hegseth has said the company must permit any use the Pentagon deems lawful.

Anthropic sued after Republican President Donald Trump moved to bar all federal agencies from using its chatbot, Claude, and Hegseth sought to designate the company as a supply chain risk, a label meant to protect national security systems against sabotage by foreign adversaries.

In March, OpenAI announced an agreement with the Department of Defense to effectively replace Anthropic with ChatGPT in classified environments. OpenAI confirmed in a statement Friday that this is the same agreement it announced in early March.

“As we said when we first announced the agreement months ago, we believe those who protect America deserve the best tools in the world,” the company said.

A company’s agreement with the Pentagon included language requiring human oversight for missions in which AI systems operate autonomously or semi-autonomously, according to a person who was not authorized to speak publicly about the deal. The text also states that AI tools must be used in a manner consistent with constitutional rights and civil liberties.

Those terms address the same issues Anthropic raised, and OpenAI has previously said it secured similar guarantees in its own contract with the Department of Defense.

Department of Defense perspective

Emil Michael, the Pentagon’s chief technology officer, told CNBC on Friday that it would be irresponsible to rely solely on one company, acknowledging the friction with Anthropic.

“And when we found out that one of our partners wasn’t willing to work with us the way we wanted, we made sure we were using multiple different providers,” Michael said.

Some of the companies, such as Amazon and Microsoft, have long worked with the military in classified environments, and it was not immediately clear whether the new agreements would significantly change their partnerships with the government. Others, such as chipmaker Nvidia and the startup Reflection, are new to such efforts. Both are developing open source AI models, with some key components open for other companies to build on, which Michael described as a priority for providing an “American alternative” to China’s rapidly developing AI systems.

The Department of Defense said Friday that military personnel are already using AI capabilities through its official GenAI.mil platform.

“Warfighters, civilians, and contractors are now fielding these capabilities, cutting many tasks from months to days,” the Pentagon said, adding that the military’s improved AI capabilities “give warfighters the tools they need to act with confidence and protect the nation from any threat.”

Georgetown University’s Toner said the military is often using artificial intelligence much the way civilians do: to automate rote tasks that would take a person hours or days to complete.

AI could be used to better predict when helicopters need maintenance or to find efficient ways to move large numbers of troops and equipment, she said. It could also help determine whether vehicles in a drone surveillance feed are civilian or military.

But people should not lean on it too heavily, she cautioned.

“There’s a phenomenon called automation bias, where people tend to assume that machines will do a better job than they actually do,” Toner said.

___

O’Brien reported from Providence, Rhode Island.

___

Follow AP’s artificial intelligence coverage at https://apnews.com/hub/artificial-intelligence.
