
ChatGPT has captured the world’s imagination, but it may also have trapped it. Chatbot interfaces, with their familiar conversational format, have made AI accessible to millions of people and demonstrated the power of large language models (LLMs) in a natural, engaging package. However, this very success has given rise to the misconception that AI is equivalent to chatbots, and that every application needs a chat window to take advantage of AI.
The reality is more nuanced. ChatGPT succeeded not only because of its underlying technology, but also because of the strong fit between interface and capability. By packaging AI conversationally, OpenAI created a product in which errors are tolerated and even expected. Users can correct misconceptions, refine prompts, and iterate toward better answers. The chatbot turned out to be the perfect vehicle for a technology that is inherently probabilistic and sometimes wrong.
But what works for general-purpose exploration doesn’t transfer to domain-specific business applications. When companies rush to add chatbots to their products in an attempt to appear AI-forward, they often create more problems than they solve. The impulse is understandable: executives want to demonstrate AI adoption, and a chatbot seems like the quickest route. Technically, it is easy to implement. Connect to an API, add a chat interface, and declare victory. In practice, this approach typically delivers minimal value while sharply magnifying risk.
A chatbot bolted onto a business analytics platform suddenly has to handle not only data queries but also random tangents unrelated to the core product. If the integrated LLM answers accurately only 80% of the time, the surface area for error balloons. Information doesn’t arrive at the moment users need it. The interface becomes a distraction rather than an enhancement, satisfying executives while frustrating real users.
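The compounding cost of an open-ended chat surface can be made concrete with a little arithmetic. As a rough sketch, assume each interaction in a session is independently correct 80% of the time (an illustrative rate, not a measured one); the chance of an error-free session then decays geometrically with session length:

```python
# Illustrative only: assumes a fixed, independent 80% per-interaction
# accuracy. Real error rates are neither fixed nor independent, but the
# geometric decay captures why broad chat surfaces compound risk.
per_step_accuracy = 0.8

for n in (1, 3, 5, 10):
    error_free = per_step_accuracy ** n
    print(f"{n:>2} interactions: {error_free:.0%} chance of an error-free session")
```

Under this toy model, a session of just three interactions is about as likely as not to contain a mistake, and by ten interactions nearly nine out of ten sessions contain at least one error.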
User experience revolution
The real opportunity lies in rethinking how AI is woven into workflows, rather than bolting a generic chat interface onto an existing product. Realizing that opportunity requires classic product discipline: understanding the job to be done, making sense of complex data, and presenting information at the right time, alongside the relevant actions. AI should improve these experiences, not degrade them. The interaction surface should become narrower and more focused, not broader.
Consider the evolution of AI coding assistants. Although LLMs have become somewhat commoditized, the winners in this space differentiate themselves through user experience. These products embed AI directly into developers’ existing workflows: offering real-time suggestions as code is typed, letting developers steer the AI through simple configuration files, and integrating seamlessly with the tools they already know and love. Chat is present, but it is not the only interaction mode.
The huge opportunity lies in taking existing LLM capabilities and integrating them into domain-specific workflows in a narrowly targeted way, rather than deploying generic chatbots horizontally.
Evolution of agents
The next phase, agentic AI, will only increase the need for thoughtful UX design. Agents can reason and use tools on a user’s behalf, breaking complex tasks into smaller steps. They research options, arrange travel reservations, and complete transactions autonomously, escalating to a human only when guidance is needed.
However, agent functionality does not dictate a single interface paradigm. The tools integrated, the information surfaced, and the interaction modes used vary widely with domain requirements and user needs. Consider an AI agent that helps with travel bookings versus one that helps businesses with information security. Both leverage generative AI, but the travel agent will likely present information much as popular travel websites do: picture a highly visual interface that prompts you to “Choose from three hotels that meet your price criteria and itinerary.” The infosec agent, like today’s enterprise IT security platforms, will instead deliver data-dense reports on incidents and indicators of compromise: “Here is the incident report for the sev2 security breach.”
Why narrow solutions win
The path to adoption favors narrow, vertically focused AI applications over broad, horizontal platforms. For enterprises, reaping the benefits of AI is primarily a change-management challenge rather than a technology challenge. One reason enterprise AI adoption has stumbled is that the technology is probabilistic and sometimes imprecise, unlike the deterministic, accurate systems we are used to deploying. When an AI system is 90% accurate, unlocking its value requires careful process design and gradual integration with a human co-pilot. Organizations struggle to redesign workflows across departments, especially when those workflows have been optimized around human employees for decades.
The implementation challenge is exacerbated by the fact that efficiency gains typically mean each employee does more “thinking” work. Programmers adopting AI tools often remark on how tired they are: the routine work that once gave their minds a rest between stretches of deep thought has been automated away. The best ways to build human-in-the-loop AI solutions that empower rather than exhaust are still being worked out.
Customer support offers an easy-to-understand example. AI can handle 80% of repetitive queries, but the remaining 20% requires human expertise, and errors there are more costly. Simply replacing the entire team is not realistic, and without careful UX design to support hybrid human-AI workflows, the change-management challenge becomes insurmountable.
Narrow solutions succeed because they are easier to adopt. A focused sales-assistant agent has clear users, a clear role, a defined path for human escalation, and measurable impact. Local deployment within a specific function has proven far more achievable than top-down, enterprise-wide AI efforts.
Building for a real future
The companies that win the next wave of AI applications will not be the ones with the best model or the most parameters. They will be the ones that build great user experiences tailored to specific domains and workflows. This means:
- Tight integration with existing tools and systems rather than standalone interfaces
- Information and actions presented when and where users need them
- Workflows designed around the probabilistic nature of AI, rather than fighting it
- Domain-specific capabilities that solve real problems, rather than generic features
A narrow approach lets you quickly establish the user-feedback and data flywheel that is essential to creating a more seamless experience, and it creates opportunities to earn lasting loyalty. It also means building beyond core AI capabilities to handle the middleware, compliance, permissions, security, and pricing models that make expensive AI technology economically viable.
The future of AI is not about chat windows. It is invisible intelligence woven seamlessly into the way people work, making complex tasks simpler and eliminating tedium. That future requires rethinking user experience from the ground up, not retrofitting chatbots onto existing products. The winners will be those who recognize the difference and design accordingly.
As we move toward a multimodal future, the need to rethink and innovate human-computer interaction models will only increase. While most examples of this technology today feel clunky or gimmicky, there is no doubt in my mind that we are on the path to ubiquitous computing, and that the interaction models invented and adopted in the coming years will shape the human experience for decades to come.
