00:00 Speaker A
The last story of the day: Anthropic. Yesterday or the day before, we were talking about Project Glasswing, and that's where their Claude model proved so powerful. It can explore all kinds of existing code, including operating systems and large, world-class enterprise systems, and find all kinds of bugs. So they didn't publish this Claude model, Mythos. I don't know if it's pronounced "mitos."
00:27 Speaker B
I think it's Mythos.
00:27 Speaker A
Okay. They didn't release it to the public, because this model was very good at finding these bugs. And you know, there's a big black market for things like zero-day vulnerabilities. So tell me, what are we learning here?
00:41 Speaker B
Yeah, I mean, this is kind of a scary preview of where we're going with AI. And basically, as you said, Mythos was able to catch a bug that was 26 years old.
00:58 Speaker A
Everyone was looking, but now suddenly they…
01:00 Speaker B
Software that has been around for 26 years. And we don't just release software and say, "Oh, that's it." People are constantly trying to find bugs and patch them, and despite all of that over the years, this model came through and said, "Oh, I found it." And it was able to find other exploits across every operating system and every web browser out there. It was not trained for cyberattacks; this was a general-purpose model.
01:29 Speaker A
It just appeared.
01:29 Speaker B
Yes, they just said, "No, that's not good." So they're only releasing it to their partners and saying, "Here, you guys can use this to find bugs and fix them before other big models like this come out." And while a stripped-down version may eventually come to market, I was talking to experts even before Mythos was announced, and their conclusion was that there are a number of problems AI poses for cybersecurity. Mythos can simply scan for vulnerabilities and exploit them much faster than human hackers, as we've seen here.
02:08 Speaker B
On the other hand, everyone is shipping software so much faster, which means more vulnerabilities.
02:16 Speaker A
Yes.
02:16 Speaker B
However, it's not just ordinary people; attackers are also moving much faster. Well, there was one project, uh, LiteLLM, and it ended up being part of the supply chain attack against Melkor. The way it was found was that security researchers, uh, when they accidentally downloaded this malware, their computer crashed, and they looked at it and said, oh, look how this is coded. So someone created a piece of malware that…
02:49 Speaker A
It was vibe coded and had extra bugs inside it. Okay, that's great.
02:54 Speaker B
And that's where they caught it. Yeah. And consider this: the malware already exists. That's right. But now it's like everyone's drinking too much Red Bull or something, or too much Celsius. Malware is going to move faster now and hit faster. So Anthropic, OpenAI, and Microsoft are big players in this space, of course, so it's a question of whether the big cybersecurity companies, CrowdStrike and Palo Alto Networks, can keep pace and incorporate AI into their products as part of their defense mechanisms.
03:42 Speaker B
I don't think this will ever be able to be outsourced entirely to software. You still need human stakeholders who can say, "Okay, we need to do this, we need to monitor this."
03:51 Speaker A
And that's already hurting Palo Alto Networks stock and many other stocks in the cybersecurity space.
