It’s time for governments and business leaders to develop AI cybersecurity regulations

As new agentic AI models continue to come online, cybersecurity experts are praising their ability to quickly and autonomously sift through vast amounts of data, making them powerful tools for fighting cybercrime.

However, they warn that these attributes can also be exploited by bad actors to hack into systems and compromise personal data, the economy, and national security.

A group of cybersecurity experts recently gathered for a discussion at the Berkman Klein Center for Internet & Society, during which they all agreed that it’s time for business and government leaders to regulate the technology before it’s too late.

Recent data from IBM shows that cybercrime is rising rapidly. In its 2026 study, the company found that cyberattacks targeting consumer software and system applications, many of them leveraging AI, increased by 44% over the previous year.

High-profile attacks include a November data breach against Anthropic, the AI company that developed the Claude Code assistant. Using the company’s own AI models, the attackers were able to scan source code for weaknesses and expose its inner workings.

“Unfortunately, the bad guys kind of only have to win once, whereas the defenders have to win all the time,” said James Mickens, the Gordon McKay Professor of Computer Science. “For me at least, this is a concerning aspect of what it means to think about cybersecurity, attack and defense with agents.”

Additionally, cybercriminals have made significant advances in phishing attacks in recent months, using AI to refine their targeting and craft their messages.

“A year ago, we still had email messages in our inboxes that contained misspellings and stilted, non-colloquial English, and were easy to spot if we were vigilant. Now, all those signs are gone,” said Robert Knake, a panelist and partner at Paladin Capital, a cyber-focused venture capital group.

Knake also served as acting national cyber director, responsible for strategy and budget in the newly created Office of the National Cyber Director at the White House from 2022 to 2023.

In Knake’s view, the federal government needs to start requiring the private sector to do more to prevent attacks that endanger consumer and national security.

“We are not at a stage where we can say that any error in software that causes harm makes the developer liable. That would stifle software development,” he said. “But if you’re doing these basic things, like using the latest, known-safe versions of open-source packages, you can create a safe harbor that says you shouldn’t be held responsible for bad outcomes from your software. If you’re not doing those things, you should be.”
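Knake’s “basic things” can be checked mechanically. As a rough illustration of what a safe-harbor baseline audit might involve, here is a minimal Python sketch that asks the public OSV.dev vulnerability database whether a pinned open-source dependency has known advisories; the package name and version are illustrative, not drawn from the discussion.

    # Minimal sketch: query the OSV.dev vulnerability database to check
    # whether a pinned open-source dependency has known advisories.
    import json
    import urllib.request

    def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list:
        """Return the OSV advisories for a package version (empty list if none)."""
        query = json.dumps({
            "version": version,
            "package": {"name": name, "ecosystem": ecosystem},
        }).encode("utf-8")
        req = urllib.request.Request(
            "https://api.osv.dev/v1/query",
            data=query,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp).get("vulns", [])

    if __name__ == "__main__":
        # Illustrative example: an old release of the 'requests' library,
        # chosen because it predates a published security fix.
        for vuln in known_vulnerabilities("requests", "2.19.1"):
            print(vuln["id"], "-", vuln.get("summary", "no summary"))

Running a check like this in continuous integration, and failing the build whenever the list is non-empty, is the sort of mechanical baseline a safe-harbor rule could point to.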

According to Mickens, this type of regulatory scheme may be easier said than done, especially as the cybersecurity landscape continues to change.

For decades, he said, tech companies like Microsoft and Amazon have built stop-gap security measures into their code to prevent traditional breaches, without formal government regulation.

“The big difference with AI is that the threat model changes,” Mickens said. “Essentially, you go from a human being sitting in a chair outside the data center, sending evil commands to the code running inside the data center, to trying to use AI to make the code itself go bad.”

He added that any mandate for security measures, whether aimed at external attackers or at AI, needs to clearly define the responsibilities at issue and the types of hardware and software covered in order to ensure compliance.

Josephine Wolff, associate dean for research and professor of cybersecurity policy at Tufts University’s Fletcher School, added that regulation can be particularly difficult when the private sector is required to actively find vulnerabilities across large networks.

“Documentation and inventory management are both very important, but very difficult,” she said. “Can you create an inventory of all the code running on your computer so that if there is a vulnerability or something goes wrong, at least you know where to look?”
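To make Wolff’s question concrete: inventorying a single runtime takes only a few lines of code; the difficulty she points to is doing it across every host, language, and vendor an organization runs. A minimal Python sketch for one interpreter’s installed packages (the output file name is an arbitrary choice):

    # Minimal sketch: record the Python packages installed in the current
    # environment, so that when an advisory lands you know whether the
    # affected code is even present. This covers one runtime on one machine;
    # a real inventory spans every host, language, and vendor.
    import json
    from importlib.metadata import distributions

    def package_inventory() -> dict:
        """Map each installed distribution's name to its version."""
        return {
            dist.metadata["Name"]: dist.version
            for dist in distributions()
            if dist.metadata["Name"]  # skip distributions with broken metadata
        }

    if __name__ == "__main__":
        inventory = package_inventory()
        with open("inventory.json", "w") as fh:
            json.dump(inventory, fh, indent=2, sort_keys=True)
        print(f"Recorded {len(inventory)} packages to inventory.json")

Scaling that from one environment to an organization-wide, always-current inventory is exactly the difficulty Wolff describes.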

While the question of liability after an online system is compromised remains murky, all the panelists agreed that companies should not take it upon themselves to retaliate against hackers. One school of thought in the fight against cybercrime argues that hacked companies may be uniquely positioned to “hack back.”

“I think the more actors that get into other people’s networks in the name of self-defense, the more likely it is that something escalates,” Wolff said. “The idea of bringing in the private sector and being sure it doesn’t cause further disruption seems hopelessly optimistic to me.”

Additionally, she added, it’s unlikely to be large companies like Google or Microsoft using sophisticated, surgical attacks to take down small clusters of servers that launch denial-of-service attacks.

“I think it’s going to be a lot fewer lawyers and a bunch of weirder firms that feel like this is their chance to fight North Korea,” she said. “And that doesn’t seem like a safer world to me.”

Mickens envisioned what could happen if retaliation were left to the private sector, with companies running autonomous agentic firewalls.

“Detecting intrusions, tracking hackers all the way to London or Berlin, and then doing something offensive. I think the world would quickly degenerate into what is essentially high-frequency trading, but for cybersecurity, where you have a bunch of algorithms going back and forth and just reacting to each other in near-real time,” he said. “I think we don’t want to go to that world for the same reasons we don’t want to deputize vigilantes in the physical world.”

When it comes to combating AI-powered phishing scams, the panelists envisioned a world where real human identities can be verified online, although how to achieve that remains similarly murky.

“This has been an issue in the ecosystem for 30 years,” Knake said. “I think the threat of AI means we need to make sure we know who the other person is, and if they claim to be a real human, we need to make sure that they are a real human, so we can trust who they are.”

Mickens added that while digital identification could become a viable option to combat cybercrime in the future, it could run into some obstacles due to the way consumers use the internet.

“One of the reasons digital ID has traditionally struggled is that there are many scenarios where someone wants to reveal part of their identity, but not their full identity,” he said. “For example, if I’m a victim of domestic violence or a runaway child, I might want someone to know that I’m a person, but I don’t actually want my real name to be known. I want my words to be consistently associated with a particular pseudonym, but I don’t want it to be my real name. For some of these proposals to become reality, we’re going to need to solve these kinds of practical problems.”
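One concrete reading of the pseudonym property Mickens describes is cryptographic: a persistent pseudonym can be nothing more than a key pair, so that signatures link posts to one consistent author without revealing a legal name. Here is a minimal sketch using the third-party Python “cryptography” package; deriving the pseudonym as a hash of the public key is an illustrative choice, not an established identity standard.

    # Minimal sketch: messages are consistently linkable to one keyholder
    # without revealing a legal name. The hash-of-public-key pseudonym
    # scheme here is an illustrative assumption, not a standard.
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    private_key = Ed25519PrivateKey.generate()  # kept secret by the author
    public_bytes = private_key.public_key().public_bytes(
        Encoding.Raw, PublicFormat.Raw
    )

    # The stable pseudonym is just a fingerprint of the public key.
    pseudonym = hashlib.sha256(public_bytes).hexdigest()[:16]

    message = b"Posted under a persistent pseudonym."
    signature = private_key.sign(message)

    # Anyone holding the public key can verify this post came from the same
    # pseudonym as earlier posts, while learning nothing about the author.
    # verify() raises InvalidSignature if the message or signature is forged.
    private_key.public_key().verify(signature, message)
    print(f"Verified message from pseudonym {pseudonym}")

The open problem Mickens names, proving “I am a real person” without tying the key to a legal name, is what this sketch deliberately leaves unsolved.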

Collectively, technology companies and government agencies face constant change in AI capabilities. With that change come both challenges and opportunities to leverage the technology.

“We have the ability for an agentic AI to basically sit on your shoulder, on your phone, on your computer, look at everything you’re doing, and say, ‘This certainly looks like the kill chain of a fraud scheme,’” Knake said. “We can do it. We just need to find the right market participants to make that investment and build that technology.”
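Whatever form that agent takes, its core job is spotting kill-chain patterns across everything the user sees. A deliberately naive, rule-based Python sketch of the idea, with hand-written indicators and an arbitrary threshold standing in for what a trained model would learn:

    # Minimal sketch of the on-device "agent on your shoulder" Knake imagines:
    # a rule-based scorer that flags messages matching early stages of a
    # fraud kill chain. Indicators and threshold are illustrative stand-ins.
    import re

    FRAUD_INDICATORS = [
        (r"\burgent(ly)?\b", "pressure to act immediately"),
        (r"\bverify (your|the) account\b", "credential-harvesting lure"),
        (r"\bgift ?cards?\b", "untraceable payment request"),
        (r"\bwire transfer\b", "irreversible payment request"),
        (r"https?://\d{1,3}(\.\d{1,3}){3}", "link to a raw IP address"),
    ]

    def fraud_signals(message: str) -> list[str]:
        """Return human-readable reasons a message looks like a fraud attempt."""
        return [reason for pattern, reason in FRAUD_INDICATORS
                if re.search(pattern, message, re.IGNORECASE)]

    if __name__ == "__main__":
        email = ("URGENT: verify your account within 24 hours at "
                 "http://203.0.113.7/login or pay via gift cards.")
        signals = fraud_signals(email)
        if len(signals) >= 2:  # arbitrary alert threshold for the sketch
            print("Possible fraud kill chain:", "; ".join(signals))

A production agent would replace the regular expressions with a learned model and tune the threshold against real traffic; the shape of the pipeline, observe, score, warn, is the point here.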


