Anthropic’s battle with the Pentagon shows how big tech has reversed course on AI and war

Anthropic’s standoff with the Department of Defense is forcing the tech industry to revisit how its products are used in war and which lines it should not cross. As Silicon Valley moves to the right under Donald Trump’s administration and lucrative defense contracts are awarded, big tech companies’ answers look very different than they did less than a decade ago.

Anthropic’s feud with the Trump administration intensified three days ago when the company sued the Pentagon, alleging that the government’s decision to blacklist it from government contracting violates its First Amendment rights. The company and the Pentagon have been at odds for months, with Anthropic seeking to bar its AI models from being used for domestic mass surveillance or fully autonomous lethal weapons.

Anthropic argued that succumbing to the Pentagon’s demand to allow “all lawful uses” of its technology would violate its founding safety principles, expose the technology to misuse, and erode ethical boundaries that other companies in the industry must then decide whether to cross.

While Anthropic’s refusal to remove safety guardrails, and the Pentagon’s retaliation for it, highlighted long-standing concerns about the use of AI in conflict, the battle also showed how far the goalposts have moved on ties between big tech companies and the military.

“If people are looking for a good guy and a bad guy, where the good guy is whoever doesn’t support war, they won’t find one here,” said Margaret Mitchell, an AI researcher and chief ethics scientist at the tech company Hugging Face.

Employee protests against military contracts

There are a number of factors that have led to the new embrace of militarism by big technology companies. Alignment with the Trump administration has linked tech companies to the government’s desire to expand its military power, with key CEOs expressing loyalty to the president. The administration’s pledge to overhaul federal agencies using artificial intelligence signals a concrete opportunity for AI companies to integrate their products into government and military operations in ways that will ensure profitability for years to come. The industry’s attitude is changing, partly due to concerns about China’s technological advances and the rapid increase in international defense spending.

But it wasn’t that long ago that collaborating with the military on potentially harmful technology was considered a red line by employees at many major technology companies. In 2018, thousands of Google employees protested a program called Project Maven, which used AI to analyze Pentagon drone footage.

“We believe that Google should not be in the business of war,” more than 3,000 employees said in an open letter at the time. Google decided not to renew the Project Maven contract in the wake of the protests and announced a policy prohibiting the pursuit of technologies that could “cause or directly facilitate harm to humans.”

But in the years since the Maven protests, Google has cracked down on employee activism, removed language from its 2018 policy banning the development of weapons technology, and signed numerous contracts allowing the military to use its products. In 2024, the tech giant fired more than 50 employees over protests against the company’s ties to the Israeli military. After the firings, CEO Sundar Pichai sent a memo telling employees that Google is a business, not a place to “fight over disruptive issues or debate politics.”

Just this week, Google announced it will offer its Gemini AI to the military as a platform for creating AI agents to work on unclassified projects.

OpenAI likewise banned all military use of its models until 2024; its chief product officer now serves as a lieutenant colonel in the U.S. Army’s Executive Innovation Corps. The startup, along with Google, Anthropic, and xAI, signed a deal worth up to $200 million with the Department of Defense last year to integrate its technology into military systems. On the same day Secretary of Defense Pete Hegseth declared Anthropic a supply chain risk, OpenAI signed an agreement with the department allowing its technology to be used on sensitive military systems.

Elsewhere in the tech industry, hawkish companies like Anduril, a defense technology firm founded the year before the Maven protests, and surveillance technology maker Palantir have made their alliances with the Pentagon a cornerstone of their business and are trying to sway Silicon Valley politics toward their worldview. Palantir pioneered collaboration with the military, contracting with military intelligence in the early 2010s to map explosives planted in Afghanistan. Last year, CEO Alex Karp published a book largely devoted to advocating closer integration of the tech industry, AI, and the U.S. military, in which he accused the Google employees who protested in 2018 of nihilism.

After Google’s Project Maven contract ended in 2019, Palantir took it over. According to the Washington Post, Maven is now the name of a secret system that military personnel use to access Anthropic’s Claude.

Anthropic goes to war

While Anthropic has won widespread praise for its standoff with the Pentagon, co-founder and CEO Dario Amodei has emphasized that AI companies and the government want much the same thing.

“Anthropic has far more in common with the Department of Defense than differences,” Amodei wrote in a blog post last Thursday.

Although the White House has denounced Anthropic as a “far-left woke company,” Amodei’s views on the use and abuse of AI in conflict are far from dovish pacifism. In a lengthy essay published in January, Amodei warned of AI’s potential harms, including the creation of deadly biological weapons and the threat of China using the technology for malicious ends. At the same time, he argued that companies should equip democratic governments and militaries with the most cutting-edge AI possible to counter authoritarian adversaries.

His concerns centered less on AI making it easier to kill and wage war than on the technology’s reliability and the risk that its integration could leave too few people “pushing the buttons” in control of armies of autonomous drones.

Amodei’s essay also hinted at some of the central issues in the fight with the Pentagon, including AI’s potential as a tool of mass surveillance. While advocating safeguards against the misuse of AI, he wrote that the technology could be used for national defense “in any way except in ways that bring us closer to authoritarian adversaries.”

Amodei has maintained his company’s red lines but has repeatedly said he wants Anthropic to keep working with the Department of Defense. The company’s lawsuit against the department shows just how extensively it is willing to work with the military and modify its products for military use.

Anthropic’s lawsuit, filed in California, states that the company “does not impose the same restrictions on the military’s use of Claude as it does for its civilian customers,” and that the government version of Claude is less likely to refuse requests that would be denied in a civilian context, such as using Claude to handle classified documents, military operations, or threat analysis.

The government has reportedly used Claude for target selection and analysis in its bombing campaign against Iran, and Anthropic has not suggested there is any problem with that use case. In a blog post published last week on Anthropic’s website, Amodei said he does not believe his company plays any role in the military’s operational decision-making. He asserted that Anthropic remains committed to supporting America’s frontline warfighters and providing them with technology.

“We’ve told the Department of Defense that nearly all use cases are fine,” Amodei told CBS News last week. “Basically, 98 or 99 percent of the use cases they want to pursue, everything except two, we will support.”
