A knowledge advantage can save lives, win wars, and avert disasters. At the Central Intelligence Agency, basic artificial intelligence (machine learning and algorithms) has long served its mission. Now, generative AI is joining the effort.
CIA Director William Burns has said that AI technology will augment humans, not replace them. Nand Mulchandani, the agency's first chief technology officer, is guiding that effort. The urgency is real: adversaries are already spreading AI-generated deepfakes aimed at undermining U.S. interests.
Mulchandani, a former Silicon Valley CEO who led successful startups, was appointed to the position in 2022 after working at the Department of Defense's Joint Artificial Intelligence Center.
Projects he oversees include a generative AI application, similar to ChatGPT, built on open-source data (meaning data that is unclassified, publicly available, or commercially available). It is used by thousands of analysts across the 18 agencies of the U.S. intelligence community. His other CIA projects involving large language models are, understandably, kept secret.
This Associated Press interview with Mulchandani has been edited for length and clarity.
Q: You recently said that we should treat generative AI like a “drunk, crazy friend.” Can you elaborate?
A: When these generative AI systems “hallucinate,” they can act like a drunk friend at a bar who says something that goes beyond the boundaries of normal concepts and sparks unconventional thinking. Keep in mind that these AI-based systems are probabilistic in nature, so they are not precise (and are prone to fabrication). That makes them great for creative work such as art, poetry, and painting. But I would not yet use these systems to perform precise calculations or to design airplanes or skyscrapers. In those tasks, “close enough” just doesn't work. They can also be biased and narrowly focused, which is what I call the “rabbit hole” problem.
Q: Currently, the only large language model I know of in enterprise-scale use at the CIA is an open-source AI called Osiris, created for the entire intelligence community. Is that correct?
A: That's all we're publicizing. It has been an absolute home run for us. But we need to expand the discussion beyond just LLMs. For example, we ingest large amounts of foreign-language content across multiple media types, including video, and we use other AI algorithms and tools to process it.
Q: The Special Competitive Studies Project, an influential advisory group focused on AI in national security, has published a report saying U.S. intelligence agencies must rapidly integrate generative AI given its disruptive potential. It sets a two-year timeline for moving beyond experiments and limited pilot projects to “deploying Gen AI tools at scale.” Do you agree?
A: The CIA is 100% committed to leveraging and scaling these technologies. We take this as seriously as we take any technology issue. Since we already have Gen AI tools in production, we believe we are well ahead of that schedule. The deeper answer is that we are in the early stages of a huge number of additional changes, and a large part of the work is integrating the technology more broadly into applications and systems. It's still early days.
Q: Who are your large language model partners?
A: I'm not sure it's interesting to name vendors at this point. There is an explosion of LLMs available on the market today. As a prudent customer, we do not intend to tie our ship to any particular set of LLMs or any particular set of vendors. We have evaluated and used nearly every leading LLM out there, both commercial-grade and open source. We do not view the LLM market as a singular market where one lab is simply better than another. As you have seen in the market, models are constantly leapfrogging one another as new products are released.
Q: What are the most important use cases for large language models at the CIA?
A: Summarization, mainly. It is impossible for the CIA's open-source analysts to digest all the media and other information we collect daily. So this has been a game changer for gaining insight into sentiment and global trends. Analysts then dig into the details. They must be able to reliably annotate and explain the data they cite and how they arrived at their conclusions. Our tradecraft has not changed. Together, the classified and open-source information we collect gives analysts a broader perspective.
Q: What are the biggest challenges in adopting generative AI in government?
A: There's not a lot of cultural resistance within the agency. Our employees work with AI every day to maintain a competitive edge. Clearly, the whole world is captivated by these new technologies and their remarkable productivity gains. The key is addressing constraints on how information is partitioned and how systems are built. Data segregation is often done for legal reasons rather than security. How can we efficiently connect systems to reap the benefits of AI while keeping everything fully compliant? We've thought hard about this problem, combining data in ways that preserve encryption and privacy controls. There are some really interesting technologies emerging that can help us do that.
Q: Generative AI today has roughly the sophistication of a schoolchild, whereas espionage is an adult's game: it's all about seeing through an adversary's deception. How does Gen AI fit into that work?
A: First, let me emphasize that human analysts have the advantage. We have world-leading experts in their fields. And much of the information that comes in requires a tremendous amount of human judgment, including the judgment of the individual providing the information, to assess its importance and significance. We are not asking machines to reproduce any of that. Nor do we want computers to do the work of our domain experts.
The model we're looking at is the co-pilot model. We believe Gen AI can greatly aid brainstorming, generate new ideas, and improve productivity and insight. When harnessed properly, these algorithms can be a force for good, so we need to be very deliberate about how we use them. Used incorrectly, they can really hurt you.