SAN FRANCISCO — Given the sheer number of companies touting “AI” in 2024, it's no wonder the term itself has become something of a buzzword.
Dan Schiappa, chief product and services officer at managed detection and response (MDR) vendor Arctic Wolf, is one of thousands of security professionals tasked with blocking out that noise. At RSA Conference 2024 this month, the majority of vendor booths on the show floor touted the power of AI in their companies' products. But not all of those vendors necessarily use AI, and even as a term, AI can mean widely different things.
Schiappa, who joined Arctic Wolf in 2022, spoke with TechTarget Editorial at the conference to discuss this challenge and how Arctic Wolf is using AI. He also discussed the company's new risk assessment tool, Arctic Wolf Cyber Resilience Assessment, which was launched as part of RSAC.
What prompted the need for a new risk assessment tool?
Dan Schiappa: One of the key things we're seeing is that transferring risk is a key element of [cyber] insurance. One of the things we learned is that the whole market is at a very early stage. It's like buying life insurance, [with the insurer asking,] "Do you smoke? Do you go cage diving with sharks? OK, then this is your policy." So we want customers to understand their real cyber risk environment and share that information with us. We use this tool to build a bridge that increases their chances of insurance coverage.
One of the things we talk about in our concierge delivery model is the concept of the security journey. Our concierge team is your interface. They do what we call an in-depth review of your security posture -- SPIDR -- which helps our customers ensure they have good security hygiene and that everything is running properly, and that is reflected in this tool. We look at third-party tools, we make sure they're using the right configurations, and we make sure they're not running counter to the NIST framework. We understand our customers' overall risk posture, and we now have interfaces with insurance companies so customers can provide that information to them and get discounts on their policies.
As a managed provider [in the MDR space], is your job not to fully manage your clients' security operations for them, but rather to take them on a journey toward controlling their own security posture?
Schiappa: It's a shared responsibility model. Yes, absolutely. We need to help our customers set up their ecosystem properly, even if they have no security expertise. Let me explain. Some of our customers are very sophisticated, so we do more co-management. And some say, "I don't know what I'm doing. I'm a water equipment company. Just do it for me." But we still have a shared responsibility model of helping set up and manage their security efforts.
What else are you most concerned about right now when it comes to security?
Schiappa: I think the AI aspect is big and important for us. Our industry has long been an early adopter of AI, and we've been using it for years to do all kinds of things, like anomaly detection and malware detection. Our ability to deliver AI has improved significantly today, but my biggest concern with this industry is that a .ai domain name alone doesn't make you an AI company.
People are saying, "AI will be a force multiplier." But instead of finding a problem and understanding what type of AI can help solve it, they are adopting AI technology and then trying to find a reason to use it. Generative AI is great. It's a good thing in some ways, but it's not good for everything. There are many different kinds of models -- deep learning, neural networks, causal ML and Bayesian models, among others -- that can be built to solve different problems. I'm seeing a kind of AI genericism around GenAI and LLMs that indicates a fairly weak sophistication in how people deploy AI. It looks more like tabloid-headline AI deployment than building AI to solve the right problem.
I imagine this is difficult for you as a CPO involved in the messaging process, because the definition of AI has become somewhat meaningless. As someone who decides the positioning of a product, and as someone who works with AI, what do you think about this?
Dan Schiappa, Chief Product and Services Officer, Arctic Wolf
Schiappa: Yeah, that's a concern, because the true definition of AI implies sentience, and we're not there. I think AI and machine learning have become umbrella terms for anything that uses mathematical analysis to solve problems. And I think there are elements of machine learning that are very powerful and very capable that are not AI, and they get lumped under the AI brand. There's simple security analysis that you're doing, there's some actual fundamental stuff that has a big impact on the business, and it all gets lumped together as AI. It's a very loosely applied term.
And I think that's where people end up claiming AI capabilities without actually having them. The area that has truly brought AI to the forefront is GenAI -- the ChatGPT-type tools that people can actually touch, feel and interface with. Much of AI in cyber has historically happened behind the scenes. You didn't see it; you could see the results, but you couldn't actually manipulate it. So I think people get so caught up in the former that they miss some of the really great innovations that can still be done behind the scenes.
As a product person, how have you tried to cut through that noise?
Schiappa: We talk a lot about what we're doing with AI and what problems we're solving. I think that's important -- in other words, what problem are you solving? Let's break it down into several categories. One is reducing the amount of data. We perform approximately 55 trillion security observations every week. It is impossible for humans to process that. So how do we make those workloads manageable? Our typical customer receives about one alert per day. For comparison, a SIEM is probably generating hundreds to thousands of alerts per day. We filter all that data to focus on what's important, and we let AI do it.
The second part is how to correlate similar data. How do we know that this signal from the endpoint is related to this signal on the network, which is related to this one from the cloud? Bringing those together and providing them to the security analyst is context -- context for a holistic view when making security decisions, or taking that correlated data and feeding it to another AI model to perform detection and remediation across multiple attack surfaces. The last part is the interface. Approximately 40% of SOC tickets are customer questions. How can you create a chatbot that filters those out and reduces the effort of security operators, allowing them to focus on actual, thorough security work?
So how can you offer natural language as a query interface for security operators and customers, without them having to know SQL or the schema? You simply say, "Show me all servers that have talked to this IP address in the last 3 minutes," and you get your answer.
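To make the idea concrete, here is a minimal, purely illustrative sketch of that kind of natural-language-to-SQL translation. It is not Arctic Wolf's implementation: real products typically use an LLM rather than a rule, and the table and column names here (conn_log, server, dest_ip, ts) are invented for the example.

```python
import re

def nl_to_sql(question: str) -> str:
    """Translate one narrow class of questions into SQL.

    Handles questions like:
      "Show me all servers that have talked to <ip> in the last <n> minutes"

    A production system would cover far more phrasings (usually via an
    LLM) and would map onto its own real schema; conn_log, server,
    dest_ip and ts are hypothetical names used only for illustration.
    """
    pattern = (
        r"servers? that (?:have )?talked to "
        r"(\d{1,3}(?:\.\d{1,3}){3}) "          # capture the IPv4 address
        r"in the last (\d+) minutes?"          # capture the time window
    )
    m = re.search(pattern, question, re.IGNORECASE)
    if not m:
        raise ValueError("question not understood")
    ip, minutes = m.group(1), int(m.group(2))
    return (
        f"SELECT DISTINCT server FROM conn_log "
        f"WHERE dest_ip = '{ip}' "
        f"AND ts >= NOW() - INTERVAL '{minutes} minutes';"
    )

print(nl_to_sql(
    "Show me all servers that have talked to 10.0.0.5 in the last 3 minutes"
))
```

The point of the sketch is the design boundary Schiappa describes: the operator supplies plain English, and the system -- however it is implemented internally -- is responsible for producing the structured query against the schema.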
Editor's note: This interview has been edited for clarity and length.
Alexander Culafi is a senior information security news writer and podcast host for TechTarget Editorial.