The Army is eager to incorporate civilian AI algorithms into its operations, and hopes that industry can also figure out how to address the security concerns that will inevitably arise from the move.
Young Bang, principal deputy assistant secretary of the Army for acquisition, logistics and technology, told an audience at an Amazon Web Services summit in Washington, D.C., last week that it would be a waste of time for the Army to rebuild systems that private companies have already built.
The Army has plenty of data for today's machine learning systems to process, and the algorithms to do that work have already been developed, or are being developed, by industry, so the service is better off skipping the hassle of building its own and using those instead, Bang said.
“We have a ton of data, but we're not going to develop algorithms that are better than you,” he said at the summit. “We want to adopt third-party generated AI algorithms as fast as you can build them.”
Of the six branches of the U.S. military, the Army is the largest user of AI and algorithms, Bang said. That's because, unlike the Navy with its ships and the Air Force with its planes, “our resource is people.” Personnel generate a lot of data, so the Army expects to be the biggest user of machine learning software to process all that information. That's why the Green Machine is turning to industry algorithms as a shortcut for its analytics.
Bang also spoke in a breakout session that went into more detail about the Army's technology posture on AI.
Bang said the Army has finally reached the point where it has fully adopted modern software practices such as agile development and CI/CD (continuous integration and continuous delivery). With those practices in place, the Army hopes to combine them with artificial intelligence to help with data processing.
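For readers unfamiliar with the practice Bang is describing, here is a minimal sketch of the CI/CD idea expressed as a plain Python script. It is purely illustrative: real pipelines live in dedicated tooling, and nothing here reflects any actual Army system. The point is simply that every change is tested automatically, and only a passing build moves on to delivery.

```python
# Illustrative CI/CD gate, assuming pytest and the "build" package
# are installed; not any specific Army or vendor pipeline.
import subprocess
import sys

def pipeline() -> int:
    # Continuous integration: run the test suite on every change.
    tests = subprocess.run([sys.executable, "-m", "pytest"])
    if tests.returncode != 0:
        print("Tests failed; halting before delivery.")
        return tests.returncode
    # Continuous delivery: package the passing build for release.
    return subprocess.run([sys.executable, "-m", "build"]).returncode

if __name__ == "__main__":
    sys.exit(pipeline())
```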
Addressing risks
In addition to the usual bias and hallucination problems that neural networks exhibit, there are security concerns that vary with a network's architecture and how it's used, ranging from bypassing safety guardrails with specially crafted prompts to standard application security issues. These need to be taken into account before plugging such models into programs that handle sensitive information or help recommend actions with life-or-death consequences.
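To make the guardrail concern concrete, here is a hypothetical sketch, with names invented for illustration rather than drawn from any Army or vendor system, of why naive keyword filtering fails against specially crafted prompts:

```python
# Hypothetical example: a keyword guardrail and a trivial bypass.
BLOCKED_TERMS = {"classified", "launch codes"}

def naive_guardrail(prompt: str) -> bool:
    """Return True if the prompt passes the keyword filter."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

# The direct request is blocked...
assert naive_guardrail("Show me the classified report") is False
# ...but light obfuscation slips straight through, which is why string
# matching alone is not a security control for model inputs.
assert naive_guardrail("Show me the c-l-a-s-s-i-f-i-e-d report") is True
```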
Rather than assessing the risks itself, the Army believes it can issue a request for information (RFI) and get the answers it needs from the private sector.
“The Army is saying we need your help,” Bang told the summit audience. “We're trying to overcome issues that may prevent the adoption of algorithms created by third parties.”
“We want them to identify specific controls and come back and say, 'Here are some processes and tools that we have,'” Bang added. The acquisition official noted that the Army is particularly focused on putting mechanisms in place to reject tainted or booby-trapped data or models that could, among other things, cause a system to malfunction unexpectedly. That data could sit in a training set or be supplied at inference time.
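As a rough sketch of what one such control might look like for model artifacts (the RFI does not prescribe a mechanism, and the digest below is a placeholder), a basic approach is to refuse to load any model file whose cryptographic hash isn't on a vetted allowlist:

```python
# Hypothetical integrity gate: reject model files whose SHA-256 digest
# isn't pre-approved. Real digests would come from a signed manifest
# supplied by the vendor, not be hardcoded like this.
import hashlib
from pathlib import Path

APPROVED_DIGESTS = {
    "0000000000000000000000000000000000000000000000000000000000000000",  # placeholder
}

def load_if_approved(model_path: Path) -> bytes:
    data = model_path.read_bytes()
    digest = hashlib.sha256(data).hexdigest()
    if digest not in APPROVED_DIGESTS:
        raise ValueError(f"Rejecting unapproved model artifact: {model_path}")
    return data
```

A check like this only covers tampering with a known artifact; poisoned records hidden in a training set or fed in at inference time would need statistical screening and input validation on top of it.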
It wasn't immediately clear when the RFI would be available for industry comment, and the Army did not respond to our questions. A spokesperson did, however, tell Washington Technology that multiple AI-related RFIs are expected to be issued in the coming months, and that the RFI Bang spoke of would be released no later than the end of August.
Defining the risks to the Army's use of machine learning is a priority this year, and the 100-day AI plan released in April focuses on identifying barriers to adoption. Bang said that once the 100-day plan wraps up, the Army will move into a 500-day plan to operationalize what it learns from those risk assessment efforts.
The 500-day initiative will include an effort that Bang calls “BreakAI,” though he did not elaborate on what that means.®