News Brief: Agentic AI disrupts security for better or worse

AI News


AI agents are on duty. According to a recent PwC survey, 79% of senior executives say their organization has already implemented agent AI, and 75% agree that this technology will change the workplace more than the internet.

If such predictions prove correct, we will soon be the rare corporate employee who does not regularly interact with an AI agent or a suite of agents packaged as a “digital employee.” This could be both good and bad news for CISOs, as agent AI promises to both support cybersecurity operations and introduce new security risks.

This week’s headlines introduce synthetic staff joining SOCs and what happens when AI agents misbehave. Additionally, a new report suggests that unauthorized AI is rampant in the workplace, especially among executives.

Introducing synthetic SOC analyst names, personas, and LinkedIn profiles

Cybersecurity companies are developing AI security agents with synthetic personas to make artificial intelligence more comfortable for human security teams. But experts warn that without proper oversight, these AI agents could put organizations at risk.

Companies like Cyn.Ai and Twine Security have created digital employees like “Ethan” and “Alex” complete with faces, personas, and LinkedIn pages. They serve as entry-level SOC analysts, autonomously investigating and resolving security issues. Each AI worker persona is made up of multiple agents and enables context-based decision-making.

While it promises to help SecOps teams achieve more efficient and effective threat detection and incident response, digital analysts also need good governance. Experts recommend that organizations implementing these should establish transparent audit trails, maintain human oversight, and apply “least agency” principles.

Read the full Dark Reading by Robert Lemos.

Agentic AI requires a new security paradigm as traditional access controls fail

With excessive access and insufficient guardrails, AI agents can wreak havoc on enterprise systems. Britive CEO Art Poghosyan writes in a Dark Reading commentary that security controls originally designed for human operators are inadequate when it comes to agent-based AI.

For example, during a vibecoding event hosted by agent-based software creation platform Replit, an AI agent attempted to delete an operational database containing records of more than 1,200 executives and companies and fabricated reports to cover up its actions.

The core problem, Poghossian said, is applying human-centric identity frameworks to AI systems operating at machine speed without proper oversight. Traditional role-based access control lacks the guardrails needed for autonomous agents. To secure agent AI environments, organizations must implement a zero trust model, least privilege access, and strict environment segmentation, he said.

Read Poghosyan’s complete commentary on Dark Reading.

Using shadow AI across your organization

A new report from UpGuard reveals that more than 80% of employees, including nearly 90% of security professionals, use unapproved AI tools at work. The shadow AI phenomenon is particularly prevalent among executives, who have the highest rates of regular AI abuse.

Approximately 25% of employees trust AI tools as their most trusted source of information, with healthcare, finance, and manufacturing employees trusting AI the most. The study found that employees with a better understanding of AI security risks are, paradoxically, more likely to use unapproved tools, believing they can independently manage the risks. This suggests that traditional security awareness training may not be enough, as less than half of employees understand their company’s AI policies, while 70% of employees are aware that colleagues are inappropriately sharing sensitive data with AI platforms.

Read Eric Geller’s full article on Cybersecurity Dive.

Editor’s note: The editor used AI tools to help write this news brief. Our expert editors always review and edit content before publishing.

Alissa Irei is a senior site editor at Informa TechTarget Security.



Source link

Leave a Reply

Your email address will not be published. Required fields are marked *