Government officials have expressed numerous concerns about AI being used to enable foreign and domestic actors to launch malicious attacks against the U.S. Concerns range from the use of artificial intelligence to develop biological weapons to governments harnessing the technology in the employment of nuclear weapons.
Since President Biden signed the Executive Order on AI, which aims to build a global consensus on AI safety, standards, governance, and testing, U.S. government agencies have been strengthening their AI skills, knowledge, and expertise. The Executive Order imposes several requirements on agencies, including:
- Develop standards, tools, and tests to ensure AI systems are safe, secure, and trustworthy.
- Safeguard against the risks of using AI to engineer dangerous biological materials by developing powerful new standards for biological synthesis screening.
- Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content.
- Develop principles and best practices to mitigate the harms and maximize the benefits of AI for workers by addressing job displacement, labor standards, workplace equity, health and safety, and data collection. The guidance is meant to keep employers from underpaying workers, evaluating job applications unfairly, or impinging on workers' ability to organize.
Top-level concerns
A recent panel hosted by the nonprofit Center for a New American Security (CNAS) featured three government officials who highlighted some of the current concerns the U.S. has when thinking about AI.
Michael Kaiser, assistant secretary for policy, strategy and analysis at the Department of Homeland Security's Office for Countering Weapons of Mass Destruction, said last year's “frenzy” around AI, particularly about its catastrophic risks, has subsided somewhat, and he attributed that to greater involvement of experts and to the work the government is doing.
Kaiser pointed to a report released last year by the Department of Homeland Security that said the discussion has moved away from hysteria and become more focused.
If you look at our report, the first finding is to develop consensus on the language to use: to develop consensus among the various communities (the national security community, public health, science, and even the food and agriculture community) so that we understand the actual level of risk based on scientific principles and understand the capability of an adversary to use biological weapons to attack the homeland.
Alexandra Seymour, staff director for the House Homeland Security Committee's Cybersecurity and Infrastructure Subcommittee, said it's difficult to get consensus on AI in Congress, pointing to the plethora of bills and focus areas because AI touches every sector of the economy. But for the House Homeland Security Committee, the driving force is pragmatism and figuring out how to leverage existing regulations.
The way we're approaching this is pragmatic. What we're trying to think about is that there are laws already in place. Let's examine those. Let's look at where there are gaps and needs, and identify areas of real risk that the existing laws don't cover.
I think Congress is taking a step back and, like I said, really trying to understand where the gaps are … trying to understand the specific applications and making sure they're really thinking about what the risks are.
Similarly, Seymour wants the U.S. to acknowledge that there are risks it must accept, but to better understand what those risks are so it can manage them without limiting progress on AI development.
We continue to think about this tension: we don't want to stifle innovation, and we don't want to move so fast that we impose regulations that burden an entire industry. We want to be very precise about the definition and scope of the law so that there aren't any unintended consequences.
And I think part of it is understanding exactly where you're willing to accept risk, letting innovation flourish and work, learning from advances in the technology, and allowing room for that to happen.
Speaking from the Office of Emerging Security Issues, a division of the State Department, Foreign Service Officer Wyatt Hoffman said his office is focused on promoting stability, preventing conflict, and reducing the risk of unintended escalation in the international security environment.
Hoffman emphasized that while there are risks, there are also opportunities.
We should not lose sight of the potential benefits of AI in the military, such as improved accuracy and precision in the use of force, and better information for decision-makers to help avoid unintended engagements and escalation.
The approach his office is taking is to build an international consensus on a set of “standards” that would guide how countries develop and deploy AI capabilities in the military domain.
These standards are future-proof in the sense that we cannot predict exactly how capabilities will evolve a year or five years from now, or what military applications they will be used for. What we can say with confidence is that once nations implement them, put certain processes in place, and begin to build certain technical capabilities, they will have an effect on mitigating the risks of AI development and deployment, regardless of what military applications we're talking about.
Risks to keep in mind
The three officials focused on different areas where AI could pose serious risks to U.S. national security. Kaiser, for example, is most concerned about the risk of terrorism, particularly the role AI could play in developing biological weapons.
Kaiser pointed out that this is the core challenge of AI: it tends to offer rewards and pose threats in equal measure.
When it comes to AI, it's always bio. Some of these AI models, particularly biological design tools with dual-use capabilities, can advance science across multiple disciplines and have huge potential for cancer research, other types of research, understanding interactions in the human body, how proteins function, and more.
But all of these could potentially be turned to harmful use by state or non-state adversaries. In terms of regulation, perhaps the best idea is not to try to regulate AI itself, but instead to consider the regulatory regimes around the transition into the physical world of what might be developed in an in silico AI environment.
Seymour said she's looking at two main risks. First, how will the U.S. ensure that the AI being developed in the U.S. is itself secure? This will depend on assessing the physical infrastructure, the cybersecurity around those facilities, and the security of the AI models themselves. The subcommittee Seymour staffs is particularly concerned with ensuring AI innovators can mitigate the risk of cyberattacks.
Second, the Committee is focusing on how AI can be used to protect U.S. critical infrastructure.
I think this is extremely valuable when you're looking for threat detection, and for those who have been following this space a little bit more closely, we're seeing a significant increase in nation-state activity in the critical infrastructure sector, and we're seeing this space being specifically targeted by Chinese state actors.
One of the things that has government officials sounding the alarm is how threat actors are infiltrating the critical infrastructure sector. They still use typical techniques, such as injecting malware and gathering information for espionage, but now they are also burrowing into existing infrastructure, positioned to launch attacks in the event of a crisis such as an attack on Taiwan.
This is very worrying for the authorities. Going back to artificial intelligence, one of the encouraging things we've seen is that authorities say AI is really helping them find some of the attackers hiding in critical infrastructure sectors.
One big concern for Hoffman of the Office of Emerging Security Issues is how the U.S. government will convince other governments that using AI in the employment of nuclear weapons would set a dangerous precedent for international security.
And finally, one thing that we spend a lot of time on is the relationship between AI and nuclear weapons in particular, which is obviously where the concerns about the use of AI in the military arena become most acute, particularly in the context of nuclear command and control and communications.
The lack of transparency and common understanding about how different countries are approaching the use of AI in the military sphere is itself a concern and potentially destabilizing.
We support providing transparency and assurances, which is why the United States, along with our allies France and the United Kingdom, has committed to maintaining human control and involvement in all actions essential to informing and implementing national decisions regarding the use of nuclear weapons.
And we encourage other nations to make similar commitments, so that we all draw a common line: we will not cross into automating nuclear command and control.
My take
There are delicate balancing acts on many fronts and many unknowns, but what is clear is that the national security of the United States and every other nation will face an entirely new set of more complex challenges as a result of advances in AI.