National Harbor, Maryland – Artificial intelligence is poised to transform the work of security operations centers, but experts say humans must remain involved in managing their organizations' responses to cybersecurity incidents at all times.
AI agents can automate many repetitive and complex SOC tasks, but for the foreseeable future they will have major limitations, such as being unable to replicate human institutional knowledge or understand bespoke network configurations, according to experts presenting here at the Gartner Security and Risk Management Summit.
The promise of AI dominated this year's Gartner conference, where experts shared how the technology can make cyber defenders' jobs easier, even as it has a long way to go before it can replace experienced SOC personnel.
As the speed, sophistication and scale of attacks grow, agentic AI can help tackle those challenges, Hammad Rajjoub, director of technical product marketing at Microsoft, said in his presentation. “What better to defend at machine speed than AI itself?”
Silent Partner
According to security experts presenting here, AI can already assist SOC staff with several important tasks. Pete Shoard, a vice president analyst at Gartner, said AI can help people find information by automating complex search queries, write code “without the need to learn [a] language” and summarize incident reports for non-technical executives.
But automating these activities carries risks if the AI gets things wrong, Shoard said. SOCs must apply the same “robust testing process” used for human-written code to verify AI-written code, he said, and employees must review AI-generated summaries to avoid “sending nonsense up the chain” to “the decision-making person.”
In the future, AI may even be able to automate intrusion investigations and remediation.
Today, most AI SOC startups focus on using AI to analyze alerts to “reduce the cognitive load on humans,” said Anton Chuvakin, senior staff security consultant in the office of the CISO at Google Cloud. In the more distant future, he said, he hopes machines will be able to remediate and fix certain problems on their own.
Chuvakin said some IT professionals are unnerved by the prospect of turning their painstakingly customized computer systems over to AI, but they should prepare for that future.
“Imagine a future where there are agents working on your behalf, protecting and defending your environment even before it can be attacked,” Microsoft's Rajjoub said in his presentation on agentic AI.
Rajjoub predicted that within six months, AI agents will be able to perform their own reasoning and automatically deploy various tools on a network to achieve goals specified by human operators. Within a year and a half, these agents will be able to improve and modify themselves in pursuit of those goals, he said. And within two years, he predicted, agents will be able to modify their given instructions in order to achieve broader assigned goals.
“It's not two, three, four, five or six years from now,” he said. “We're literally talking about weeks and months.”
Limitations and risks
However, as AI agents take on more tasks, monitoring them becomes more complicated.
“Do you really think employees can keep up with the pace at which agents are being built?” said Dennis Xu, a research vice president at Gartner. “We'll never be able to catch up.”
He proposed a bold solution: using agents to monitor other agents. But that capability, he said, is even further off.
Many analysts urged caution when deploying AI in the SOC. Chuvakin sorted tasks into several categories: those he believes AI can accomplish in the near to medium term, those that are “plausible but risky,” and those that are “firmly rejected.”
In the risky category, Chuvakin listed autonomous tasks such as patching legacy systems, responding to intrusions, and attesting to regulatory compliance. “I've seen people fill [out] compliance questionnaires using consumer-grade ChatGPT,” he said.
Among the tasks Chuvakin said he cannot imagine AI accomplishing anytime soon are strategic risk analysis, crisis communications, and threat hunting against top-tier nation-state adversaries. Fighting advanced hacker groups “is a human job,” he said.
Gartner's Shoard warned that relying on AI to create tabletop exercises could leave staff dependent on it as threats evolve, and that using AI to write threat-detection queries could erode employees' research skills. “You'll end up with underdeveloped staff,” he said, “staff that are overly dependent on things like AI.”
Maintaining “tribal knowledge”
Chuvakin said AI will never fully replace humans in the SOC, because human judgment is an essential part of analyzing and responding to security incidents.
“A lot of what we do in a real SOC involves tribal knowledge,” Chuvakin said, and AI struggles with such activities. Many models, he said, recommend actions that make no sense for the particular network in which they are operating. In particular, AI cannot create threat-detection rules tailored to highly customized legacy IT environments.
Chuvakin urged prospective customers of “AI-SOC-in-a-box” startups to question those companies about how their magic will “[address] what is in a human head.”
AI can also enhance the skills and capabilities of SOC analysts. Shoard called it a great force multiplier for the SOC workforce, but he warned businesses not to rely on it too heavily.
“If you think you can gut your SOC staff just because you suddenly bought AI features, you're in for a solid disappointment,” Shoard said. “AI won't replace your security staff, so use it to augment them [and] make their work better.”
Trusting AI means verifying it
In the SOC of the future, experts said, humans will not only work alongside AI agents; they will also need to monitor those agents.
“We don't want full autonomy,” said Upendra Mardikar. “We have to have humans in the loop.”
These humans will need to ensure that AI agents' actions are auditable and governed by company policies, experts said. Jose Veitia, director of information security at Red Ventures, said companies need to “ensure that all actions are verified.”
Designing automated systems also means feeding them the right data. “If we're simply going to allow machines to make decisions for us,” Gartner's Shoard said, “we must trust that they have all the relevant information to make that decision effectively.”
Trust and verification were a recurring theme in AI discussions throughout this week's Gartner conference.
“Trust has to be the fabric that these agents are built on,” Rajjoub said. “The more general and capable agents become, the more important their security will be to all of us.”
And as AI agents become more capable, their value in the SOC could increase significantly.
“Unfortunately, AI is not magic, and I don't think it ever will be,” Shoard said. “But it will make things better for us in the SOC. You need to treat it with great care, but use it, experiment with it.”
