How AI adoption is driving investment in cybersecurity fundamentals: Blackwood executive

AI News


Solution providers recognize that the need to protect AI "often comes down to [investing in] good cyber hygiene," Blackwood President Ryan Morris told CRN.


In the race to enable the secure deployment of AI and agent technology, many organizations are realizing that the first step is not necessarily to increase their use of AI, according to top executives at solution provider Blackwood.

Rather, the urgency around secure AI deployment is often refocusing organizations on long-standing cybersecurity fundamentals such as endpoint visibility, identity security and data protection, executives told CRN.

[Related: 10 Key AI Security Controls For 2026]

“This goes back to the core areas of security that have been around for a long time: endpoint security for visibility and enforcement, identity for privileges and permissions, and [investing in] good cyber hygiene,” said Ryan Morris, president of Annapolis, Md.-based Blackwood, No. 93 on CRN’s 2025 Solution Provider 500.

Blackwood executives said the need to focus on security fundamentals will become more acute as AI agents and applications become increasingly connected to organizations’ IT systems and repositories of sensitive data.

But Blackwood CTO Chris Every said the problem is exacerbated by the growing difficulty of gaining visibility into AI and agent behavior.

Every said the use of AI is rapidly expanding, and organizations are now looking to control these tools through a variety of means, including monitoring browser-based AI usage and using Model Context Protocol (MCP) proxies to understand how agents are connecting to tools.

Still, he said, developers and users are often quick to adopt new approaches, such as connecting agents directly through a command-line interface (CLI).

“If your only point of presence is the MCP proxy and everyone decides to connect their agents directly to the CLI, you have zero visibility,” Every said. “So it’s a game of whack-a-mole [where] maintaining visibility requires constant evolution of your security architecture.”
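The kind of inspection an MCP proxy performs can be sketched in a few lines. The function name and logging structure below are illustrative, not any vendor's product, though the `tools/call` method and `params.name` field do follow MCP's JSON-RPC message format. The sketch also makes Every's blind spot concrete: it only sees traffic routed through it, so an agent wired directly to a CLI never appears in the log.

```python
import json


def audit_mcp_message(raw: str, audit_log: list) -> str:
    """Inspect one JSON-RPC message passing through a hypothetical MCP
    proxy, recording which tool an agent invokes before forwarding the
    message unchanged."""
    msg = json.loads(raw)
    if msg.get("method") == "tools/call":
        # MCP tool invocations carry the tool name in params.name
        audit_log.append(msg.get("params", {}).get("name"))
    return raw  # forward the message unmodified


# Example: an agent asking a file-access tool for a document
audit_log: list = []
audit_mcp_message(
    '{"jsonrpc": "2.0", "id": 1, "method": "tools/call", '
    '"params": {"name": "read_file", "arguments": {"path": "notes.txt"}}}',
    audit_log,
)
print(audit_log)  # → ['read_file']
```

A real deployment would sit this logic inline on the agent-to-server connection and enrich each entry with the identity and privileges behind the call, which is the correlation Every describes below.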

The challenge for security teams, therefore, is not only to understand what AI apps and agents are connecting to, but also what data the AI tools have access to and what identities and permissions they are using, he said.

According to Every, this is a key reason why discussions about AI security often turn into discussions about identity security, particularly the need to protect the non-human identities used by agents.

“The reason identity is so important is because AI is just a much more supercharged version of the service accounts that we’ve maintained for years,” he said. “You’re literally authorizing an account or application to run on your behalf. In some cases, they may impersonate you, or they may essentially have their own access.”

A big part of the impact, Every said, is that traditional identity security often focuses on users and accounts, while AI tools use those privileges in a different way.

In traditional identity security, “the context is just the concept of the user or account, not the application that those privileges are being leveraged in, or any other applications that might exist alongside it,” he said.

Every said this can create something like an “octopus scenario,” in which an AI application is connected to multiple different tools with varying levels of access.

Ultimately, he said, this is why having strong visibility is so important to effectively protect the use of AI and agent tools.

If organizations can understand everything that their AI tools are connected to, “they can see the prompts that are in play. They can understand what data they’re accessing and the level of privilege that’s there, and how those identities are actually handled,” Every said. “You have to be able to understand all of this. You can’t just have a piece of it and try to solve the AI security problem. It’s Sisyphean, to say the least.”


