Algorithmic Wars: DARPA Hosts Workshop to Develop ‘Trustworthy AI’
The Defense Advanced Research Projects Agency is seeking help in understanding how best to use artificial intelligence for national security.
In June, DARPA will host the first of two workshops with industry, academia, and other government agencies as part of its AI Forward initiative. Through these workshops, DARPA aims to bridge the “fundamental gap” between the AI innovation underway in commercial industry and the needs of national security, according to Dr. Matt Turek, deputy director of the agency’s Information Innovation Office.
“The commercial industry … has invested heavily and developed systems that are, or appear to be, very capable,” Turek said in an interview. “However, these systems may not be a good fit for DoD use cases.”
Current commercial AI systems may be able to handle “low-risk decisions,” but when it comes to mission-critical decisions for the Department of Defense, such systems cannot be allowed to fail in unpredictable ways, Turek said; one needs to be able to predict and understand in detail how the system will react.
For example, large language models like ChatGPT are “very compelling” for text generation and document creation, tasks that are “relatively low risk,” he said. But these models begin to fall apart when one considers applying them to “critical areas” such as “examining and summarizing large intelligence reports” for the Pentagon and intelligence agencies.
“There is evidence of them hallucinating information that didn’t necessarily exist, or fabricating quotes from scientific publications that were never written,” he continued, stressing that such behavior would be disqualifying in the context of the intelligence analysis process. He emphasized the dichotomy between what is suitable for commercial use cases and what currently meets the needs of the Department of Defense.
AI Forward will act as an “engagement mechanism” with DARPA’s community, Turek said. The program begins with a virtual workshop June 13-16, followed by an in-person event in Boston from July 31 to August 2. During these sessions, participants will have the opportunity to brainstorm new directions toward trustworthy AI with applications for national security, according to a DARPA release.
Turek declined to say how many applications DARPA has received for AI Forward, but said he expects the acceptance rate to be in the range of 25-30%. Participants come from academia, industry, and government, and represent a variety of AI-related fields, including theory, human-centric AI, philosophy and ethics, computer vision, and natural language processing. The goal, he said, is to bring together diverse ideas and backgrounds to “think holistically about AI.”
DARPA doesn’t have a specific use case it hopes the AI Forward events will solve. Rather, Turek said, there are three core areas it aims to move forward “in order to have trustworthy AI and ultimately the kind of AI needed for national security”: fundamental AI science, AI engineering, and the teaming of humans and machines.
For AI science, the community needs to establish an understanding of the scientific principles that allow an AI system to be designed, broken down into parts, measured, and reassembled. Understanding how the system works in that way, he said, will inform the second pillar, AI engineering.
Turek used the analogy of building a bridge, noting that bridges are not built by trial and error, but that current machine learning models are built “a lot by trial and error.”
AI engineering, he said, should work the way civil engineers “can break down a very big problem into many smaller ones, solve them and then put it all back together and make sure the whole bridge works.” “We need to be able to take it apart, take measurements on each part of the AI system, reassemble it, and understand how it behaves when fully assembled.”
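Turek’s bridge analogy gestures at component-wise measurement rather than any specific method, but the idea can be sketched. The snippet below is a minimal, hypothetical Python illustration; the pipeline stages, test data, and metric are all invented for the example, not drawn from DARPA. It measures each stage of a two-stage model pipeline in isolation, then measures the reassembled system end to end.

```python
# Minimal sketch of "take it apart, measure each part, reassemble" for an
# AI pipeline. The stages, data, and accuracy metric are hypothetical and
# only illustrate the component-wise evaluation idea Turek describes.

def extract_features(text: str) -> list[float]:
    # Stand-in for a real feature extractor (e.g., an embedding model).
    return [float(len(text)), float(text.count(" "))]

def classify(features: list[float]) -> str:
    # Stand-in for a real downstream classifier.
    return "long" if features[0] > 20 else "short"

def pipeline(text: str) -> str:
    # The fully assembled system: extractor feeding the classifier.
    return classify(extract_features(text))

def accuracy(predict, labeled_examples) -> float:
    # Shared measurement applied to a part or to the whole system.
    correct = sum(1 for x, y in labeled_examples if predict(x) == y)
    return correct / len(labeled_examples)

if __name__ == "__main__":
    # Hypothetical test sets: one targets the classifier alone,
    # the other exercises the assembled pipeline end to end.
    classifier_tests = [([25.0, 4.0], "long"), ([10.0, 1.0], "short")]
    system_tests = [("a short note", "short"),
                    ("a considerably longer sentence here", "long")]

    print("classifier accuracy:", accuracy(classify, classifier_tests))
    print("end-to-end accuracy:", accuracy(pipeline, system_tests))
```

In a real system the parts would be far more complex, but the pattern of per-component test suites plus end-to-end checks is the engineering discipline the bridge analogy points to.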
The third pillar, the teaming of humans and machines, is something DARPA has been thinking about since the 1960s, he said. The questions include how AI systems construct an understanding of humans and interact accordingly, and how they model human values and reflect them appropriately.
Concerns in this area extend beyond the teaming of AI and humans to the amount of computing and energy resources required to build effective large-scale AI models, he said. “Computing resources are important, which means energy utilization is just as important,” he said, adding that figuring out the “proper use of resources” for future AI systems will be a challenge.
After the workshops, DARPA will fund some of the efforts that come out of AI Forward, Turek said. According to the AI Forward webpage, “the final outcome of the workshop will be the identification of about 40 promising areas for future research.”
“We are looking for the best and most compelling ideas, and depending on the results, we may be able to scale up or adjust our funding,” Turek said. “What are the compelling ideas we can start funding that could take AI in new directions?”
