AI warfare is here in the form of quadcopters and high-tech drones.

President Trump’s historic dismantling of the Iranian regime is unfolding with dizzying speed, showcasing the next generation of AI warfare. Far from replacing human judgment, the U.S. military’s use of AI in Iran has largely focused on processing vast amounts of information, one of the oldest and most vexing problems in war. AI is helping commanders refine target selection, sift through intercepted communications, conduct combat damage assessments, reduce the time needed to identify and eliminate terrorist targets, and reduce collateral damage.

In the following exclusive excerpt from his new book Code Red: Left, Right, China, and the Race to Control AI (HarperCollins), author Wynton Hall reveals how AI warfare and autonomous weapons are enhancing America’s ability to achieve peace through force in ways that are reshaping warfare for the age of AI.

Kargu-2 is a quadcopter capable of autonomous navigation and precision attack. STM

In March 2020, on the battlefields of Libya, civilization may have crossed an eerie threshold. Turkish-made autonomous drones reportedly “cornered and … engaged” retreating forces loyal to General Khalifa Haftar without human guidance.

According to a UN-commissioned report, these lethal autonomous weapons were "programmed to attack targets without the need for a data connection between the operator and the weapon; in effect, a true 'fire, forget and find' capability."

It was not a theoretical scenario devised by military analysts or ethicists. It wasn’t even a scene from a Hollywood sci-fi thriller about murderous robots. It was a real-life event in which a machine independently selected and attacked human targets.

The Bullfrog autonomous weapon can be mounted on a pickup truck. Allen Control Systems

The weapon in question wasn't a crude hobby drone with a camera duct-taped to it. It was a quadcopter loitering munition called Kargu-2, manufactured by Turkish defense company STM. Kargu-2 supports multiple warhead configurations and delivers precision strikes with autonomous navigation and flight control. It also carries a day-and-night automatic target recognition system.

In the words of West Point researchers, it is “designed as an anti-personnel weapon that can select and attack human targets based on machine learning object classification.”

STM CEO Murat İkinci said the Kargu-2s are equipped with facial recognition technology and can operate in swarms of up to 20 aircraft for coordinated attacks.

It remains unclear whether the incident in Libya resulted in any loss of life, but drone warfare expert Zachary Kallenborn, writing in the Bulletin of the Atomic Scientists, noted that the UN report "strongly suggests" it did. If so, he said, it would mark "a new chapter in which autonomous weapons are used to hunt and kill humans based on artificial intelligence."

Libyan General Khalifa Haftar AFP (via Getty Images)

If the Libyan incident offered a glimpse of the possibility of autonomous war, Israel's response after Hamas's massacre of 1,200 innocents on October 7, 2023, demonstrated the real-world, near-future capabilities of AI on the battlefield. The Israel Defense Forces (IDF) deployed three AI systems — "The Gospel," "Lavender," and "Where's Daddy?" — that collectively identified terrorists for rapid elimination.

The Gospel compiled lists of buildings potentially used by terrorists. Lavender scoured mountains of surveillance data, including images and phone records, to build and rank the kill list. The menacingly named "Where's Daddy?" used mobile phone signals to track targets to their homes, marking them for elimination by airstrike.

AI systems can use surveillance data to create and rank kill lists and quickly gather a list of potential terrorist buildings. AFP (via Getty Images)

The combination of these three AI systems dramatically sped up target acquisition and kill-chain protocols. As former IDF legal adviser Tal Mimran put it, previously, "to collect 200 to 250 targets, a team of about 20 intelligence officers would have to work for about 250 days. Today, AI does it in a week."

This reality underscores another fundamental divide between left and right. Owing to the left's leanings toward materialism and utopianism, leftists often believe that conflicts are best resolved through dialogue, harmony, and disarmament.

If we all just try a little harder, the thinking goes, we can build heaven on earth. And when real evil does appear, leftists tend to assume it is somehow our own fault.

The right assumes the opposite, while always maintaining a skeptical attitude toward those in power, because it believes that evil exists and cannot be eliminated by human means. Yet conservatives also believe that America represents a force for good. Because it does.

The first unmanned aerial vehicle battalion will be launched in 2025. AFP (via Getty Images)

Throughout this chapter, we will see how the left's persistent instinct to demonize America and downplay the threat posed by our enemies may hinder the nation's preparation for a coming world of AI terrorism.

The new AI battlefield: context and stakes

The overarching impact of the AI revolution will extend beyond everyday concerns such as work and education. It will shape how the United States fights wars and maintains national security.

The United States has always relied on cutting-edge military technology to defeat our enemies and protect our people. As weapons evolve, military operations and intelligence must also evolve to defeat adversarial nations armed with AI weapons and build next-generation systems that will strengthen America’s advantage on future battlefields.

The recent increase in AI spending highlights the urgency. Federal government AI-related contracts jumped nearly 1,200% in one year, from $355 million in 2022 to $4.6 billion in 2023.

A new generation of AI warfare is here. But it doesn't look like a killer robot from an action movie. Adobe Stock

This surge was primarily due to increased Department of Defense (DOD) spending. The Pentagon’s AI contracts alone more than doubled during the same period to more than $550 million.

This surge does not mean that traditional weapons such as tanks, fighter jets, and destroyers will be scrapped. Rather, it reveals how AI is being integrated into current and future defense programs to maintain battlefield superiority. As the Israeli example illustrates, some applications of AI and machine learning focus on helping soldiers quickly sift through large amounts of data to find the intelligence needle in the proverbial haystack. Other applications relate directly to autonomous weapons. With the rapid adoption of AI around the world, U.S. defense planners know that adversaries are gaining access to lethal AI weapons and surveillance systems. And, like most technologies, its cost keeps falling, putting unprecedentedly lethal capabilities within reach of rogue states and terrorists.

For example, consider Bullfrog, an AI-enabled autonomous robotic gun system with a 7.62 mm M240 machine gun mounted in a smart turret.

The AI-enabled machine gun delivers more accurate small-arms fire against drone targets than the average soldier can. Another advantage is its relatively low cost. But that affordability also means similar AI systems will increasingly fall into the wrong hands.

Bottom line: The democratization of lethal AI weapons means that technologies that were once the exclusive domain of superpowers are increasingly available to a large number of actors, both state and non-state.

Leaders on both sides of the aisle seem to understand the importance of this moment. Sen. Mark Warner (D-Virginia) warned that the proliferation of these technologies "drastically lowers the barriers to entry for foreign governments to apply these tools to their military and intelligence services." Even more concerning, he notes, many of these AI innovations are developed and released by U.S. companies, only to be repurposed by foreign militaries and intelligence agencies. Reverse engineering of U.S. weapons is nothing new. But as AI-powered systems grow cheaper and more lethal, the potential scale of human slaughter rises significantly.

This reality means that AI will dramatically impact how we defend our country, fight, and win wars. President Vladimir Putin, no friend of American interests, openly declared: "Artificial intelligence is the future not only for Russia, but for all of humanity. Artificial intelligence brings with it immense opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this field will be the ruler of the world."

Conservatives have always understood that peace comes from strength, not weakness. There’s a reason this principle has guided national security policy from President Ronald Reagan to President Trump. Because it works.

President Vladimir Putin has called AI “the future of all humanity.”

As President Reagan said, “We know that wars are fought not when the forces of freedom are strong, but when they are weak. That is when tyrants are tempted.”

This wisdom applies perfectly to the AI threat vectors we face today. As former British Prime Minister Margaret Thatcher reminded the world in her eulogy for Reagan, the Gipper's decision to rebuild the U.S. military gave our country the technological advantage needed to win the Cold War "without firing a shot."

President Trump similarly emphasized military strength as a path to peace. As he said in the farewell address of his first term, he was "particularly proud to be the first president in recent decades not to start a new war."

The lesson is clear. Equipping our soldiers, sailors, airmen, Marines, and Guardians with world-class training and weapons leads to peace.

As these systems become cheaper, more powerful, and more widely available, we must apply the same determination in deploying AI to gather intelligence, enhance cybersecurity, improve battlefield readiness, and counter attacks from enemy AI weapons. Specifically, U.S. leaders must face at least four core national security challenges in the AI era.

1. The autonomous arms race

2. The rise of AI-powered terrorism

3. The dangerous gap between Silicon Valley innovation and national security needs

4. AI alignment and containment risks

These are not the only threat vectors posed by AI. But how we handle them will greatly affect our ability to keep the peace. If we lose our military advantage, we will invite a chaotic threat matrix of low-cost, large-scale AI-powered attacks.

Excerpted from "Code Red: Left, Right, China, and the Race to Control AI" by Wynton Hall. Copyright 2026 by Wynton Hall. Published with permission of Broadside Books and HarperCollins Publishers.
