Artificial intelligence has been an important tool for many nations’ militaries over the years.
The war in Ukraine is now driving innovation, and as the conflict drags on, AI’s role in it could become even greater.
Ali Rogin looks at how militaries are using AI today and how it will be used in the future.
Ali: Increasing the use of artificial intelligence on the battlefield holds promise, but it also comes with risks.
Congress is urging the Pentagon to invest more in AI and to move quickly to avoid falling behind in this fast-moving but critical technology.
Paul Scharre of the Center for a New American Security is a former Army Ranger, Pentagon official, and author.
Thank you for joining us.
Artificial intelligence is already used to some extent on the battlefield, but we are not talking about fully autonomous technology.
What is currently available?
What are combatants already using?
And where do you see technology going in the near future?
Paul: That’s right.
We have already seen AI in use on the Ukrainian battlefield.
Humans still control the war.
But one of the things AI is doing is processing information faster.
AI is being used to analyze satellite imagery and drone video feeds, helping armies better understand what is happening on the battlefield, make decisions faster, and target enemies more quickly and accurately.
Ali: So what about systems that are fully autonomous, where humans have no control over them at all? What are their advantages and disadvantages?
Paul: We have already seen drones in use in Ukraine. These drones have all the components needed to build a fully autonomous weapon, one that could fly over the battlefield, find its own targets, and attack those targets without additional human intervention.
And that raises very difficult legal, moral, and ethical questions about human control over the use of force in war, and it has driven debates over the use of these fully autonomous weapons.
Ali: Do you expect to see more of this?
And are there concerns about how these weapons might be used differently by states versus non-state actors such as terrorist organizations?
Paul: Well, war encourages innovation.
So the longer this war drags on, the more innovations we will see on the battlefield.
We are already seeing innovative uses of drones and counter-drone technology, such as electronic warfare systems that can locate drone operators and call in artillery fire against them.
And that kind of technology pushes toward greater military autonomy.
But it is not limited to nation-states alone.
ISIS actually had a fairly sophisticated drone force a few years ago and was conducting fairly effective drone strikes against Iraqi forces.
Ali: Well, we have talked about how AI is used in weapons, but what about systems off the battlefield?
So how does that translate into military use?
Paul: Most of what the military does isn't really at the tip of the spear in combat.
It's logistics, human resources, and maintenance.
People and goods move from one place to another every day.
Much like what Walmart and Amazon are doing.
What is different is what happens at the destination.
AI has advantages in all of the other non-combat functions important to military operations.
And if the military can improve its maintenance, logistics, personnel, and financial functions by just 10 percent, that will ultimately have a huge impact on its capability at the edge of the battlefield.
Ali: Some of what we’re seeing in Ukraine utilizes off-the-shelf technology that can easily be purchased for thousands of dollars.
How is the U.S. Department of Defense responding to this kind of competition, and how is it faring?
Paul: They haven’t caught up.
That’s the short version.
They are far behind because their cultures are fundamentally different.
And the bottom line is that you can’t buy AI the same way you buy an aircraft carrier.
The military moves too slowly.
It is stuck in a complicated bureaucracy.
And Pentagon leadership is trying to shake things up.
Last year saw a major reorganization of the AI, data, and software organizations within the Department of Defense.
However, no significant changes have been seen since then.
So the Department of Defense will have to find ways to cut through the red tape and move faster to keep pace with this critically important technology.
Ali: Finally, on a global level, as this technology continues to spread, some countries are calling for rules of the road.
What does that conversation look like?
Paul: We have certainly seen the discussion of lethal autonomous weapons over the last few years, going all the way back to 2014.
There is quite a wide range of opinions on this.
Countries like the U.S. and Russia have said that we already have rules.
We have laws of war.
The laws of war apply to autonomous weapons just as they apply to other weapons.
And we need to focus on adhering to these and making sure the use of these weapons complies with the laws of war.
Ali: What about the other side of it?
What about those who say additional rules are needed and that existing rules don't fully apply here?
Paul: That’s right.
About 30 countries say they want a pre-emptive, legally-binding treaty banning autonomous weapons before they are built.
But for now, none of the major military powers or leading robotics developers are part of that group.
So that group does not yet have the political clout to bring about a treaty.
That could change as the technology advances and as broader concerns about AI fuel growing calls for global regulation.
Ali: Paul Scharre of the Center for a New American Security, thank you very much for joining us.
Paul: Thank you for having me.