RSA CONFERENCE 2023 – SAN FRANCISCO – As businesses and government agencies weave artificial intelligence (AI) and machine learning (ML) into their broader systems, they must consider a range of risk and resilience issues that begin with cybersecurity concerns and extend well beyond them.
A panel of prominent AI and security researchers at RSA Conference 2023 on April 24 dug into the problem areas of AI resilience, including critical issues such as adversarial AI attacks, AI bias, and the ethical application of AI modeling.
Cybersecurity professionals need to start addressing these issues within their organizations and as partners with governments and industry groups, panelists said.
“Many organizations are trying to integrate AI/ML capabilities into their core business functions, but in doing so they increase their attack surface,” explained panel moderator Bryan Vorndran, assistant director of the FBI Cyber Division. “Attacks can occur at any stage of the AI and ML development and deployment cycle, and can target models, training data, and APIs.”
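To make the training-data stage of that attack surface concrete, here is a minimal sketch (not from the panel) of a label-flipping data-poisoning attack using scikit-learn. The dataset, model, and 30% poisoning rate are all illustrative assumptions.

```python
# Minimal sketch of a training-stage attack: label-flipping data poisoning.
# Dataset, model, and poisoning rate are illustrative assumptions,
# not details from the panel.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

baseline = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The attacker quietly flips the labels on 30% of the positive training
# examples -- e.g., relabeling fraudulent transactions as legitimate.
rng = np.random.default_rng(0)
positives = np.flatnonzero(y_train == 1)
flipped = rng.choice(positives, size=int(0.3 * len(positives)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flipped] = 0

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print(f"baseline accuracy: {baseline.score(X_test, y_test):.3f}")
print(f"poisoned accuracy: {poisoned.score(X_test, y_test):.3f}")
```

Comparing the two scores makes the point: the compromise happens at training time, but the damage shows up in production.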
The good news is that if the community starts working now, there will be time to ramp up these efforts.
“We have a really unique opportunity here as a community,” said Neil Serebryany, CEO of CalypsoAI. “We are aware of the fact that there is a threat, we have seen early incidents of this threat, and it is not yet in full swing.”
“Yet” is the operative word, he stressed, and his fellow panelists agreed. When it comes to AI, the risk management field is in a place similar to where cybersecurity was with the Internet in the 1980s, said Bob Lawton, head of US mission capabilities at the Office of the Director of National Intelligence’s Science and Technology Group.
“If it were 1985 and we knew the challenges we’d face in the cyber domain today, imagine what we would have done differently, as a community and as an industry, compared with what we did 35 years ago. That’s exactly where we are with AI right now,” Lawton said. “We have the time and space to get it right.”
The threat is still very rudimentary, especially when it comes to direct adversarial attacks on AI systems, because attackers currently do only the work necessary to achieve their goals, said Christina Liaghati, AI strategy execution and operations manager at MITRE Corporation.
“I think we’re going to see a lot more malicious actors bringing a higher level of sophistication to these attacks, but they don’t have to yet. I think that’s what’s really interesting about this space,” she told the audience.
Nevertheless, she cautioned that organizations cannot downplay the risks. Threat actors’ interest in building the sophistication and knowledge to attack AI models will only grow as those models are embedded in systems they can attack and profit from. That is as true for a small organization using a simple ML model in its financial system as it is for a government agency using a simple ML model in its intelligence capabilities.
In other words, everyone is at risk.
“A system is vulnerable if you’re deploying AI in an environment where actors can abuse, evade, or attack that system,” she said. “So it’s not just the tech giants or those deploying AI at scale. You may be at risk, perhaps in new ways you haven’t thought of or aren’t necessarily ready for.”
The challenge of AI for many cybersecurity executives is that addressing these risks requires them and their teams to acquire a whole new set of knowledge and terminology around AI and data science.
“I think the core of AI assurance is not a traditional information security issue,” Serebryany said. “This is a machine learning problem that we’re trying to find a way to translate into the information security community.”
For example, to strengthen a model, he said, you need to understand key data science metrics like precision, recall, and F1 score.
“So I think it’s incumbent on us to understand how to take these underlying ML concepts and translate the terminology, concepts, and standard operating procedures into a context that makes sense within the information security community,” he said.
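As a hypothetical illustration of the metrics Serebryany cites, the short calculation below derives precision, recall, and F1 from made-up confusion-matrix counts; none of the numbers come from the panel.

```python
# Worked example of the data science metrics Serebryany mentions.
# The confusion-matrix counts are made up for illustration.
tp, fp, fn = 80, 10, 20  # true positives, false positives, false negatives

precision = tp / (tp + fp)  # of everything flagged, how much was right: 80/90
recall = tp / (tp + fn)     # of everything real, how much was caught: 80/100
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(f"precision={precision:.3f}  recall={recall:.3f}  f1={f1:.3f}")
# precision=0.889  recall=0.800  f1=0.842
```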
At the same time, Liaghati said, AI/ML models and systems are deployed in the context of other systems where security teams have decades of experience managing risk, so the security fundamentals should not be underestimated. Data security, application security, and network security principles are still very important, as are standard risk management and OpSec best practices.
“A lot of it is just good practice. It’s not some big, fancy adversarial layer, and it’s not that you can simply patch your dataset. It’s not necessarily that complicated,” she said. “Many of these attacks can be mitigated simply by managing how much information you publish in the public domain about what models you use, what data you use and where it comes from, [and] what the wider system context around that AI system looks like.”