UI professor’s paper explores the use of AI in the military

As artificial intelligence continues to advance, a University of Iowa professor is leading an international discussion on how to ethically deploy and regulate the technology in military applications, where it plays an increasingly important role.

Jovana Davidovich, a UI associate professor of philosophy, published a paper exploring the use of AI in military technology in the Oct. 13 issue of Nature Machine Intelligence, a scientific journal focused on AI research.

Davidovich’s findings were published after Associated Press reporting in February found Israel was using AI tools from OpenAI and Microsoft to analyze surveillance and intelligence data during military operations in Gaza and Lebanon.

Experts quoted in the report expressed concern that decisions about targeting combatants could become a fully automated, AI-powered process, raising ethical questions about potential mistakes and accountability for harm to civilians.

The UI itself is actively involved in defense-related research, including a $982,705 grant for research on the detonation of high-energy materials for hypersonic vehicles and a $9 million Multidisciplinary University Research Initiative grant for research on environmental films for defense-grade materials.

Davidovich’s paper examines the idea of meaningful human control, a principle meant to guarantee that soldiers have the final say over when autonomous weapons use lethal force.

“The problem is that very often the idea of meaningful human control essentially boils down to needing to make sure the human last pressed the button,” Davidovich said.

Davidovich said the idea of meaningful human control is unrealistic in the face of modern AI-assisted warfare. Many autonomous systems, such as those on naval ships, are defensive, and if 300 enemy drones were detected, it would be impossible for a human to press a confirmation button to authorize each shot, she said.

Davidovich said meaningful human control is a common principle among policymakers who argue that fully autonomous weapons powered by AI violate human dignity.

In a report published in April, Human Rights Watch explained how fully autonomous AI weapons systems undermine human dignity by delegating life-and-death decisions to machines.

One example Davidovich cited is loitering munitions, also known as kamikaze drones, which she said have been used by Israel in the Israel-Hamas war and by Ukraine in response to Russia’s invasion.

Davidovich said operators can draw boxes on a screen to assign areas of responsibility in which an AI-equipped drone searches for targets. Once the drone finds a target, it can either carry out an attack autonomously or ask the operator whether to proceed.

“Whether human involvement makes things safer is just an empirical question,” she said. “We can’t always say that having a human involved makes it safer. Sometimes it does, and sometimes it doesn’t.”

In her paper, Davidovich instead proposes the concept of good human judgment, a principle she said organizations such as the United Nations and NATO have been working toward for the past three to four years.

Good human judgment is a principle that calls for human involvement at every point in the life cycle of an AI weapons system.

From design to deployment to use, it requires steps such as engineers exercising discretion in how the AI is built and militaries that purchase the systems properly testing the weapons.

“Good human judgment is about the life cycle,” Davidovich said. “Everyone involved in the life cycle must play their part.”

Davidovich received a $1 million grant from the Norwegian Research Council to begin developing an ethical risk management framework that advocates good human judgment and helps international policymakers determine the ethical risks of AI weapons.

She started developing the framework in March and plans to continue until June 2028.

“I want to put on my philosopher-professor hat and use my real-world consulting experience to provide useful guidance to defense contractors,” she said. “This will ensure good human judgment and minimize harm to civilians.”

Davidovich said wars since World War II have killed too many civilians, despite increasingly accurate weapons being touted.

“It’s not just a matter of more conflict,” she said. “We just don’t do a good job of protecting civilians during wartime. Analyses at the end of wars show that good human judgment paired with better AI weapons outperforms humans alone in minimizing casualties.”

Gabe Harris, president of Applied AI, a UI student organization that helps students explore real-world AI tools and applications, said that while AI is a useful tool in the everyday world, it performs better when interacting with users than when standing on its own.

“AI is not perfect. You need human experts to actually question what the AI is saying,” he said. “I think double-checking and fact-checking what the AI outputs, and continuing to question it, is what keeps humans in the loop, especially when it comes to military technology.”

Harris said now is the time to start developing an ethical framework like Davidovich’s.

“It’s moving really quickly,” he said. “That fact alone raises concerns when it comes to giving out military technology that could harm another human being. How can you give someone a weapon with confidence if you don’t have confidence in their decision-making?”

Capt. Kyle Harvey, UI’s aerospace studies recruiting officer and assistant professor, said he expects AI to become a key element of the U.S. military’s emerging weapons systems.

“We saw an explosion of technology in the late ’90s and early 2000s, and we’re starting to see that with AI,” he said. “The military has always been one of the first adopters of emerging technologies, and in my opinion, it will also be one of the first to fully utilize AI.”

Harvey served as an intelligence officer before being assigned to Detachment 255, the UI’s Air Force Reserve Officer Training Corps unit.

Although he could not comment on how AI will be integrated into Air Force weapons systems generally, he sees the integration of AI into the intelligence sector as a challenge.

“I operate on a completely separate network, on the top secret side,” he said. “So there are certain career fields that don’t work with the regular internet and don’t have access to AI.”

Harvey said AI’s data processing capabilities exceed what is currently integrated into the Air Force’s intelligence systems. He also said large-scale AI integration could risk the AI giving out false information or leading to data breaches.

Steve Fleegle, UI’s vice president and chief information officer for IT services, was in the audience for Davidovich’s talks on AI ethics. After listening to the presentations, Fleegle said the idea of AI being incorporated into large-scale weapons use is frightening.

“A good ethical position for AI is to keep humans in the loop,” he said. “AI can make mistakes, AI can have biases, and you need a human there to balance that out. So I 100 percent agree that keeping the human perspective there should be at the center of the discussion.”

Fleegle said advances in AI are happening at a rapid pace, and the general public is becoming more interested in the technology, as evidenced by the more than 2,000 people who have participated in the UI’s HawkAI course since its launch in fall 2024.

Throughout this progress, Fleegle said it is important to keep ethical considerations at the forefront of AI discussions.

“When people think about AI, they need to think about ethics, because some people get hooked on the technology,” he said. “We strive to raise awareness across the board and make the course accessible to everyone.”


