The US military is actively using artificial intelligence tools to process large volumes of data during military operations against Iran, Bloomberg reports. US Central Command (CENTCOM) confirmed the practice, emphasizing that new technologies are increasingly shaping modern warfare.
Artificial intelligence plays a key role in the initial selection and analysis of large amounts of information, said Col. Timothy Hawkins, a spokesman for the command. This frees up human analysts to focus on examining more complex information and making decisions.
“CENTCOM uses a variety of AI tools. It is truly a tool to assist human experts in a rigorous process that aligns with U.S. policy, military doctrine, and law,” Hawkins said, adding that final targeting decisions are made by humans, not algorithms.
One of the key systems is the Maven Smart System, a mission management platform developed by Palantir Technologies, a source familiar with the operation said. It combines data from more than 150 different sources, allowing the military to analyze information faster.
The system also uses large language models, including Anthropic’s Claude. According to sources who spoke with journalists, the tool has proven itself well and has become an important element of data analysis during the operations against Iran.
At the same time, cooperation between the Department of Defense and Anthropic remains contentious. After the parties failed to agree on terms for the use of the company’s technology, Secretary of Defense Pete Hegseth called it a supply-chain risk and gave military contractors six months to stop working with it. President Donald Trump has also called the company an “out-of-control radical leftist” and ordered federal agencies to stop working with it.
Amid this conflict, OpenAI was awarded a contract with the U.S. Department of Defense. However, following an exodus of ChatGPT users and a wave of criticism, the company called the decision “hasty” and announced several changes.
The use of AI in military operations remains controversial. Human rights groups, including the Stop Killer Robots campaign, have warned that AI-based decision support systems could blur the line between algorithmic recommendations and the actual use of force.
