QCon AI NY 2025 – Become an AI native without falling into architectural amnesia

At QCon AI NY 2025, Tracy Bannon gave a talk that explored how the rapid adoption of AI agents is reshaping software systems, and why treating all “AI” or “agents” as interchangeable puts organizations at risk of repeating common architectural mistakes.

Bannon argued that much of the current confusion stems from disparate behaviors and risk profiles collapsing under the same label. Bots are scripted responders that react to predefined triggers, whereas assistants collaborate with humans and remain primarily under human control. In contrast, agents are goal-driven actors that can make decisions and take actions across the system.

Everyone is talking about the “productivity” of AI. Few talk about the architectural amnesia that comes with it. – Tracy Bannon

To flesh this out, Bannon outlined a set of autonomy patterns that commonly appear throughout the software development lifecycle. These range from AI-assisted tools embedded in existing workflows, to task-level agents that operate within a bounded scope, to multi-agent orchestration that coordinates end-to-end flows, and ultimately mission-level autonomy, where the system plans, optimizes, and adapts toward higher-level goals.

A central theme of the talk was that the risks of autonomy do not disappear on their own. Failure occurs when autonomy grows faster than architectural discipline. Bannon explained that this gap creates what she calls “agent debt.” She connected agent debt to familiar problem areas such as identity and privilege sprawl, poor segmentation and containment, lack of lineage and observability, and weak validation and safety checks.

Bannon tied this risk to broader industry trends, pointing to research showing that a majority of technology decision makers expect AI-driven complexity to increase the severity of technical debt in the near term. She argued that AI will not introduce fundamentally new failure modes, but will magnify existing failure modes by accelerating change and increasing the scope for error.

Her focus was on applying established architectural principles to agent systems. She argued that organizations already know how to manage risk in distributed systems, but often forget these lessons under pressure to respond more quickly. Governance in this context was presented as the minimum set of controls necessary to build trust, including clear accountability and traceability of actions and data flows.

Identity was emphasized as the fundamental control on which other safety measures depend. Bannon said every agent must have a unique, revocable ID, and organizations must be able to quickly answer basic questions when something goes wrong, such as what the agent has access to, what actions it has taken, and how it can be stopped. She described a minimal identity pattern that consists of agent registries.
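The registry pattern described above can be sketched in a few lines. This is an illustrative Python sketch, not Bannon's implementation; the class and method names (`AgentRegistry`, `register`, `revoke`, `audit`) are assumptions chosen to mirror the three questions the talk poses: what the agent can access, what it has done, and how it can be stopped.

```python
import uuid
from dataclasses import dataclass, field


@dataclass
class AgentRecord:
    """Hypothetical registry entry: one unique, revocable identity per agent."""
    agent_id: str
    scopes: set[str]                                   # what the agent has access to
    actions: list[str] = field(default_factory=list)   # audit trail of what it has done
    revoked: bool = False                              # kill switch


class AgentRegistry:
    """Minimal sketch of an agent identity registry (names are illustrative)."""

    def __init__(self) -> None:
        self._agents: dict[str, AgentRecord] = {}

    def register(self, scopes: set[str]) -> str:
        # Issue a unique, revocable ID for a new agent
        agent_id = str(uuid.uuid4())
        self._agents[agent_id] = AgentRecord(agent_id, scopes)
        return agent_id

    def record_action(self, agent_id: str, action: str) -> None:
        # Every action is logged against the agent's identity
        rec = self._agents[agent_id]
        if rec.revoked:
            raise PermissionError(f"agent {agent_id} has been revoked")
        rec.actions.append(action)

    def revoke(self, agent_id: str) -> None:
        # "How can it be stopped?" -- flip the kill switch for this identity
        self._agents[agent_id].revoked = True

    def audit(self, agent_id: str) -> tuple[set[str], list[str]]:
        # "What does it have access to, and what has it done?"
        rec = self._agents[agent_id]
        return rec.scopes, rec.actions
```

In a real deployment the registry would sit behind the organization's identity provider and the audit trail would feed an observability pipeline, but the core idea is the same: no agent acts without a recorded, revocable identity.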



As we chase visible indicators of activity, we silently starve out the design, refactoring, validation, threat modeling, and other work that goes into keeping a system healthy. – Tracy Bannon

Decision-making discipline was also a recurring theme. Bannon encouraged teams to start with the “why” instead of the “how,” and to be clear about tradeoffs before increasing autonomy. She explained that decision-making is an optimization that always improves one aspect at the expense of another, such as value versus effort, or speed versus quality.

The talk ended with a call for architects and senior engineers to take an active role in shaping how AI agents are deployed. Bannon framed this responsibility as preventing architectural amnesia by designing managed agents rather than ad hoc automation, making risks and liabilities visible, and pursuing higher levels of autonomy only when they clearly deliver value. Her final message was that the core practices of software architecture are still valid, and the challenge is not to learn a completely new discipline.

Developers interested in learning more can explore additional QCon AI sessions and InfoQ coverage, and a video recording of the conference will be available starting January 15, 2026.
