Anthropic’s AI agent sparks discussion about business risk

As regulators and businesses reevaluate the risks and benefits of automation, Anthropic has expanded its efforts into AI agents and advanced models. The move has drawn scrutiny from U.S. policymakers and sparked debate among technology and security experts.

The launch of Agent Harness, which connects Claude AI models to tools like Notion and Anthropic’s own API, was an early test of how far organizations should trust software agents to work on their behalf. The product enables non-technical teams to assemble and deploy workflow-style agents that can read information, make decisions, and trigger actions across different systems.
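The article does not describe how Agent Harness works internally, but agents of this kind are typically built on a tool-calling loop. The following Python example is a rough sketch only, using Anthropic's publicly documented Messages API; the `search_notion` tool, its stub implementation, the model ID, and the task prompt are hypothetical illustrations, not part of Agent Harness or any actual Notion integration.

```python
# A minimal, hypothetical tool-calling agent loop on Anthropic's Messages API.
# "search_notion" is an illustrative stand-in; a production agent would call
# the real Notion API and add auth, retries, logging, and guardrails.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

TOOLS = [{
    "name": "search_notion",  # hypothetical tool, not an Anthropic or Notion product
    "description": "Search the company's Notion workspace and return page snippets.",
    "input_schema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}]


def run_tool(name: str, tool_input: dict) -> str:
    # Stub: a real deployment would dispatch to the Notion API here.
    if name == "search_notion":
        return f"(stub) top matches for: {tool_input['query']}"
    return f"unknown tool: {name}"


def run_agent(task: str) -> str:
    messages = [{"role": "user", "content": task}]
    while True:
        response = client.messages.create(
            model="claude-sonnet-4-20250514",  # model ID may differ
            max_tokens=1024,
            tools=TOOLS,
            messages=messages,
        )
        if response.stop_reason != "tool_use":
            # Final answer: concatenate the model's text blocks.
            return "".join(b.text for b in response.content if b.type == "text")
        # Execute each requested tool call and feed the results back.
        messages.append({"role": "assistant", "content": response.content})
        messages.append({"role": "user", "content": [{
            "type": "tool_result",
            "tool_use_id": block.id,
            "content": run_tool(block.name, block.input),
        } for block in response.content if block.type == "tool_use"]})


print(run_agent("Summarize last week's launch notes from Notion."))
```

The failure mode Bluestone describes below lives in exactly this kind of loop: every tool result shapes the model's next decision, so a badly specified tool or prompt compounds across iterations rather than failing once.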

Danny Bluestone, CEO of consulting firm Ducks House, said the shift from AI as an assistant to AI as an agent will change the risk profile of many companies. In his view, these tools lower the barrier to building complex software behavior while increasing the likelihood of failures that are difficult to predict and control.

“Anthropic’s Agent Harness puts the creation of powerful software in non-technical hands, as we demonstrated with Notion and the Claude API. This is transformative, but it also comes with risks.

“Like autopilot in aviation, it works until complexity arises. Without a deep understanding of user experience, system design, and real-world constraints, teams can quickly deploy flawed journeys and unstable infrastructure.

“The benefits are huge, but only for organizations with experienced operators who can properly define requirements, enforce standards, and quickly debug problems as they arise. If this power falls into the wrong hands, it can quickly derail a product,” Bluestone said.

Bluestone has spent over 20 years designing digital products and services for organizations such as the Bank of England, the UK Government, and Cancer Research UK. He currently advises executives on how AI will affect customer journeys and operations, arguing that automation typically amplifies existing strengths and weaknesses rather than correcting them.

His comments come as Anthropic receives intense attention in Washington over another large-scale model called Mythos, with U.S. authorities reportedly assessing its national security and cybersecurity implications. The scrutiny has highlighted concerns that advanced AI systems could simultaneously defend networks and enable more sophisticated attacks.

Madhukar Irbasraya, co-founder and managing partner at risk consultancy Oratsen, said this policy debate risks overlooking the widening gulf between organizations that are effectively using AI and those that are not. He described this as a structural shift in how resilience and exposure are distributed across the economy.

“There is no question that U.S. authorities are justified in their alarm. The pace of advances in AI, especially in cybersecurity, is unprecedented. But this story cannot be limited to risks alone. Beneath the headlines, a deeper structural change is underway: the emergence of a two-tier structure of AI ‘haves’ and ‘have-nots’,” Irbasraya said.

“On the one hand, AI-native organizations are operating at dramatically faster speeds, building capabilities that can both protect and destroy. On the other hand, many companies remain constrained by legacy systems and incremental change, and struggle to keep up. It’s that widening gap that is the real risk,” he said.

He said AI now underpins cybersecurity strategies for both attackers and defenders, and that decisions about its adoption within companies will be as important as regulations at the national level.

“In this context, AI is not only a source of new threats, but also the most powerful defense enabler available. AI can strengthen security teams, accelerate threat detection, and identify vulnerabilities at a previously unimaginable scale. But its benefits depend on planned and disciplined deployment,” Irbasraya said.

He warned that even as rivals standardize and industrialize the use of AI, ad hoc experiments could put organizations at further risk.

“The key question is not whether companies will deploy AI, but how. Piecemeal and experimental use will only widen the gap. Structured, managed, and observable AI deployments can help close the gap,” he said.

He said companies and regulators should focus on practical governance rather than extremes of either optimism or caution.

“The path forward should be shaped not by fear or hype, but by responsible enterprise-level adoption, because in a world of AI-driven threats, only AI-enabled and resilient organizations will be able to keep up,” he said.


