Missing: Threat models for defending against attacks in the age of agentic AI
Matthew J. Schwartz (@euroinfosec)
March 25, 2026

Artificial intelligence is rapidly reshaping cybersecurity in unexpected ways, and how best to defend against it remains an unanswered question, panelists at the RSAC Conference's 35th Annual Cryptographers' Panel warned.
See also: The future of agentic AI and automated threats
The rapid rise of AI agents is one of the most important changes facing cyber defenders, Dawn Song, professor and co-director of the Center for Responsible, Decentralized Intelligence at the University of California, Berkeley, said during the panel discussion Tuesday.
“These agents can now discover zero-days and vulnerabilities in open source software at scale,” she said. At the same time, such agents are critical to keeping the code development process secure, with forecasters predicting that up to 90% of all new code this year will be generated by AI-enabled code development tools, Song said.
The panel, a perennial highlight of the San Francisco event, explored the challenges of defending against AI-powered attacks, the application of differential privacy to securing the use of AI, how to implement encryption within deep neural networks, and ongoing key management challenges, including quantum computing.
Adi Shamir, the "S" in RSA, also cited the "explosion of agents" as the biggest challenge facing the industry, adding: "I'm completely scared of what's going on." Many of these tools require extensive access to personal information, including files and calendars, and there is already a wealth of anecdotal evidence of how that can go wrong, such as agents deleting treasured family photos, exposing private APIs, and wiping out production codebases.
Given these risks, "I think the way to think about agents is as very smart fools," he said.
In the age of AI, the question of whether the cryptographic systems that protect society are secure has loomed large. “Does AI pose a threat to cryptography?” asked panel moderator Paul Kocher, co-author of the SSL/TLS protocol.
Cryptography is built on the idea that security rests on hard mathematical problems, he said, but "with AI, the question of what we know and what we don't know becomes uncomfortably vague in some ways."
Panelists said it remains an open question whether AI will discover previously unknown ways to break cryptographic systems. So far, none of the tools have found new vulnerabilities; they merely repeat what is already described in the available literature.
"This is very exciting and promising, but I have to stress that there is no successful AI cryptanalysis yet," said Shamir, a computer science professor at Israel's Weizmann Institute of Science.
As large language models continue to improve, Cynthia Dwork, a computer science professor at Harvard University and one of the inventors of differential privacy and proof-of-work, urged cryptography researchers to share their findings with teams attempting to crack cryptographic systems using AI.
Doing this before publication, perhaps weeks in advance, would help test the point at which AI can independently discover previously unknown weaknesses in cryptographic systems, she said. Panelists pointed to the AI race to solve the protein folding problem, which has led to important new discoveries.
While AI has not yet been shown to find flaws in existing cryptographic systems, the implications for cybersecurity are already myriad, and "I don't think we have a clue yet as to what the appropriate threat model is that we should be defending against," Dwork said.
Beyond AI’s potential impact on cryptography, panelists highlighted the increased risks posed by AI’s ability to rapidly synthesize data from disparate sources and quickly leverage that data.
"We were impressed by the LLMs' aggressiveness and ability to really personalize the attack," such as "finding information about you in order to blackmail you," Dwork said.
AI tools could also be used for "large-scale traffic analysis" in ways never seen before, as well as for surveillance applications that impact privacy, she said.
Shamir similarly highlighted the speed with which AI can help attackers automate and operate.
"Spear phishing will become much easier and more extensive. A just-announced one-day vulnerability will be exploitable within minutes of its announcement, before anyone has even downloaded a patch. I can go on and on about the negative impacts on our security that will result from AI," he said.
While the attack speed increases, the defense often cannot keep up. According to Song, the average time required to patch a system in a healthcare setting is 500 days.
"This business of taking so long to download patches feels like a very fundamental threat," said Whitfield Diffie, best known for the Diffie-Hellman key exchange.
Who has the upper hand? A majority of the four panelists agreed that, at least for now, AI gives attackers the advantage.
Diffie disagreed with that assessment, saying that while AI makes it cheaper to attack systems and find ways to exploit them, "this is equally available to attackers and defenders," provided defenders choose to use it to more aggressively lock down their infrastructure.
As the impact of AI on cybersecurity continues to become clearer, Shamir told the audience he still thinks there are positives.
"Some agents will attack, some will defend, and some will go to the beach," he said, ending the discussion on a positive note.
