Defense leaders are looking to better governance and risk management as policies around ethical AI take shape.
As the Department of Defense accelerates its use of advanced technologies such as artificial intelligence (AI), building trustworthy AI systems becomes more important than ever, especially when these technologies are applied in the military arena.
In 2024, the Department of Defense is seeking $1.8 billion to deploy and deliver AI capabilities. DARPA has been conducting AI research for over 60 years and has invested more than $2 billion in AI advancements over the past few years.
Recognizing the potential AI brings to the battlefield, defense leaders are increasing their use of the technology to support mission-critical activities, driving the adoption of appropriate governance, risk management, regulations and policies.
Vice Adm. Kevin Lunday, commander of Coast Guard Atlantic Area, said at the 2023 Sea-Air-Space Conference in National Harbor, Maryland: “When training an officer … the first rule is to keep one hand for yourself and one for the ship. … That’s how we think about risk management as we reach for opportunities.”
Because trust is a multifaceted concept, it is difficult to define what constitutes a trustworthy system. Earlier this year, the National Institute of Standards and Technology (NIST) released the AI Risk Management Framework (AI RMF) to help federal agencies develop and deploy AI systems responsibly.
NIST defines a trustworthy AI system in 11 words: valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed.
“There are many meanings behind all these 11 words,” Lunday said.
Experts say the road to trustworthy AI systems is long and complex, involving factors such as making systems more resilient to adversarial attacks and building the infrastructure needed to support them. Defining what it means to have a trustworthy system, and how to measure success, is fundamental to that journey.
“Human-machine interaction — it’s the foundation of trust. Can you define what trust means?” Matt Turek, deputy director of DARPA’s Information Innovation Office, said at Sea-Air-Space. “What is the level of resources required to build state-of-the-art AI systems? What are the energy and climate implications of those large-scale systems? What do humans need, and how do they coordinate with the AI? How do you build an AI system whose behavior you can anticipate? I think these are all major challenges we need to address to get there and finally have truly reliable AI systems.”
The Department of Defense seeks to leverage industry solutions, but defense leaders say there are problems the private sector cannot solve because industry needs are fundamentally different from national security needs.
“I think part of that is because there is a fundamental mismatch between what industry is doing and what the Department of Defense ultimately needs,” Turek said. “Industry has produced a lot of fascinating capabilities … but industry isn’t focused on those kinds of life-or-death issues. It also has access to massive amounts of data and computation. At the Department of Defense, we care a lot about anomalous events, where there is not much training data available.”
The difficulty of defining what a trustworthy system is, and of measuring success, keeps organizations from providing adequate oversight and policy around the technology.
“I think one of the challenges from a policy perspective is how to structure regulation properly. That goes back to the basic science of how to measure and evaluate AI systems, and we don’t have some of that basic science,” Turek said. “You can’t say that a system needs to operate at this level of a trust score in this particular domain, so I think that creates challenges for policymakers.”
Guidance such as the NIST framework gives organizations resources to manage the risks associated with AI development and deployment and to promote responsible use of the technology. In creating the guidance, NIST collaborated with a wide range of experts, including psychologists, philosophers and legal scholars, to better understand AI’s impact on people’s lives.
“It’s very important to draw on a very broad range of expertise at the various stages of the AI lifecycle, from system design and development through deployment and regular monitoring, not only from the technical community but also from psychologists, sociologists and cognitive scientists, because it helps us understand the impact of the system,” Elham Tabassi, chief of staff of NIST’s Information Technology Laboratory, told GovCIO Media & Research.