How AI agents can revolutionize SOCs – with human help


National Harbor, Maryland – Artificial intelligence is poised to transform the work of security operations centers, but experts say humans must remain involved in managing an organization's response to cybersecurity incidents at all times.

AI agents can automate many repetitive and complex SOC tasks, but for the foreseeable future they have major limitations, such as an inability to replicate human institutional knowledge or to understand bespoke network configurations, according to experts who presented here at the Gartner Security and Risk Management Summit.

The promise of AI dominated this year's Gartner conference, where experts shared how the technology can make cyber defenders' work easier, even if it has a long way to go before replacing experienced SOC personnel.

As the “speed, sophistication [and] scale” of attacks grow, defenders can tackle these challenges using agentic AI, Hammad Rajjoub, director of technical product marketing at Microsoft, said in his presentation. “What better to defend at machine speed than AI itself?”

Silent Partner

According to security experts presenting here, AI can already assist SOC staff with several important tasks. Pete Shoard, a vice president analyst at Gartner, said AI can help people find information by automating complex search queries, write code “without the need to learn a language” and summarize incident reports for non-technical executives.

But automating these activities carries risks if the AI gets things wrong, Shoard said. SOCs must apply the same “robust testing process” used for human-written code to verify AI-written code, he said, and employees must review AI-generated summaries to avoid “sending nonsense up the chain” to “the decision-making person.”
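Shoard's point about testing can be made concrete. The sketch below is a minimal illustration, not anything the speakers demonstrated: it assumes a hypothetical AI-generated helper, `extract_failed_login`, that parses SSH auth-log lines, and shows the kind of unit tests a SOC would run against it before adoption — exactly as it would for a human-written parser.

```python
import re

# Hypothetical AI-generated helper: pull the username and source IP
# out of an SSH "Failed password" log line, or return None for other lines.
def extract_failed_login(line):
    m = re.search(
        r"Failed password for (?:invalid user )?(\S+) from (\d+\.\d+\.\d+\.\d+)",
        line,
    )
    if m is None:
        return None
    return {"user": m.group(1), "src_ip": m.group(2)}

# The same "robust testing process" a human-written parser would face:
def test_extract_failed_login():
    hit = extract_failed_login(
        "sshd[1234]: Failed password for invalid user admin "
        "from 203.0.113.7 port 4242 ssh2"
    )
    assert hit == {"user": "admin", "src_ip": "203.0.113.7"}
    # Benign lines must not produce false alerts.
    assert extract_failed_login("sshd[1234]: Accepted publickey for alice") is None

test_extract_failed_login()
print("all checks passed")
```

Only after tests like these pass — and a human has reviewed the regex for edge cases the tests miss — would such AI-written code reach production.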

In the future, AI may even be able to automate intrusion investigations and remediation.

Today, most AI SOC startups focus on using AI to analyze alerts to “reduce the cognitive burden on humans,” said Anton Chuvakin, senior staff security consultant in the office of the CISO at Google Cloud. In the more distant future, he said, he expects machines to handle remediation and solve certain problems on their own.

Chuvakin said the prospect of handing painstakingly customized computer systems over to AI alarms some IT professionals, but they should prepare for that future anyway.

“Imagine a future where there are agents working on your behalf,” Microsoft's Rajjoub said in his presentation on agentic AI — agents that protect and defend the environment before it can even be attacked.

Rajjoub predicted that within six months, AI agents will be able to reason on their own and automatically deploy various tools on a network to achieve goals specified by human operators. Within a year and a half, these agents will be able to improve and modify themselves in pursuit of those goals, he said. And within two years, he predicted, agents will be able to change their given instructions in order to achieve broader assigned goals.

“It's not two, three, four, five or six years from now,” he said. “We're literally talking about weeks and months.”

Limitations and risks

However, as AI agents take on more tasks, monitoring them becomes more complicated.

“Do you really think employees can keep up with the pace at which agents are being built?” said Dennis Xu, research vice president at Gartner. “We'll never be able to catch up.”

He proposed a bold solution — using agents to monitor other agents — though he acknowledged that capability is even further off.

Many analysts have urged caution in deploying AI in SOCs. Chuvakin sorted tasks into several categories: those he believes AI can accomplish in the near to medium term, those that are “plausible but dangerous,” and others he rejects outright.


