00:00 Speaker A
George, what is your biggest fear about these AI agents that are starting to be rapidly deployed not only in very large public companies, but also in very large private companies?
00:10 George
Well, the challenge with some AI agents is that it's like giving full access to a drunk intern. No one knows what they're going to do, so many guardrails need to be installed around the agent. AI and AI agents are changing the world, there's no question about it, but it needs to be done in a secure manner that minimizes risk to the business and provides some level of compliance. One of the deals we did last year was with a company called Pangea.
00:43 George
This is what really defines AI detection and response, right? You can understand what the AI is trying to do, you can put guardrails around it, and you can make sure it has the appropriate non-human identity. This, again, ties in with acquisitions like Signal, offering companies a way to leverage AI in an auditable, compliant and secure manner, minimizing risk to the business.
01:21 Speaker A
George, AI data center construction is really booming in this country. I mean, I don't think a day goes by that a new contract isn't announced to build an AI data center somewhere in this country. As you build this infrastructure, what are the risks that are starting to accumulate in terms of potential attacks?
01:54 George
Infrastructure and its associated components have been at risk long before AI. You look at the power grid, the water system, the wireless systems, the satellites. These are all fragile systems. And I also think that adversaries, you know, China, Russia, etc., are focused on exploiting these weaknesses in the infrastructure. As you know, much of the infrastructure is not owned by the government but by private companies, which may not be investing enough in security.
02:29 George
So this represents a real risk, and if all your eggs are in the AI basket, you need to make sure your data center and AI are protected. This is important. From a CrowdStrike and shareholder perspective, the more AI is deployed, the more security is needed. It's that simple. And if we believe (and I do) that there will be more AI in the next five years, the need for security will grow even stronger, and we will see an explosion in security to protect all the use cases related to AI.
