When a company employee loads an internal portal to create an IT ticket, they are greeted by “Vikram.” Or the new employee guide on the HR portal introduces itself as “Jenny.” Named virtual AI agents are not new: Apple's Siri has been around for nearly 16 years, and Amazon's Alexa for 11. But modern enterprise agents have evolved further, presenting as human-like colleagues with personalities, voices, and even avatars.
As UC/CCaaS and enterprise software vendors embrace these fleshed-out agents, there is a growing debate about whether human-like personas improve adoption and trust, or blur accountability and user expectations.
Why are humanized AI agents used?
As adoption of AI agents becomes more widespread (Grand View Research predicts the global market will reach $10.9 billion this year), it is increasingly likely that non-technical users will encounter them.
For these users, a more human-like agent can lower the intimidation barrier, making the AI agent feel more like another teammate than a tool or a bot. A study published last year found that anthropomorphic chatbots improve customers' purchasing decisions when customers engage with them. And research supported by NVIDIA found that realistic avatars with appropriate emotional expressions can increase user engagement compared to text-only interactions.
With 88% of senior executives planning to increase AI-related budgets over the next year, according to PwC, vendors need to differentiate themselves quickly in an increasingly crowded market. The popularity of these assistants in workplace applications is also driven by the growing adoption of consumer digital voice assistants such as Siri, Alexa, and Microsoft's Copilot. Use of these assistants is predicted to reach 157.9 million users in the U.S. this year, according to eMarketer data.
Will humanized AI agents drive adoption?
Some in the field argue that giving AI agents a more human-like persona will increase adoption by users. Proponents say the goal is understandability, not familiarity. Denis Chernilevsky, co-founder and CEO of AI voice reception specialist Aiphoria, says humans rely on language patterns and emotional cues to convey intent, so limited human-likeness can improve task success rates.
“Humans naturally use rhythm, cadence, and emotional tone to convey their intentions. By mirroring these patterns, AI systems can achieve better understanding and smoother conversation flow,” Chernilevsky said. In Aiphoria's experience designing large-scale voice agent frameworks, he says, measured human-likeness improves results without encouraging users to assume feelings or judgments on the part of the agent.
From a marketing and adoption perspective, humanizing AI agents can lower the barrier to initial use. According to Karina Timchenko, founder of digital marketing company brandist, vendors often see higher early-adopter engagement with agents that have a name, voice, and personality, because users approach these agents as helpers rather than tools that require training.
However, that design choice can also create a gap in expectations. When an AI agent sounds like a human, users tend to assume that the agent will reason and make decisions like a human, and react more strongly when it doesn't.
“Users lose trust in a humanized agent much faster than in a neutral agent. Any sense of disappointment is seen as a personal failure on the part of the AI agent,” Timchenko said.
Familiarity isn't just a result of a vendor's design choices. Users themselves often anthropomorphize AI systems, even when the agent doesn't explicitly encourage it, says Nick Santillo, CEO of Fractl.
According to Fractl research, more than 20% of generative AI users have already started naming their chatbots, Santillo said. This suggests that users are actively seeking relationship cues from agents, even when they are not explicitly provided. Giving agents a name, voice or light personality can foster camaraderie and increase engagement, he said. This is especially true for roles such as HR support and coaching, where human-agent interactions are continuous rather than transactional.
It's difficult to quantify whether that engagement is sustained and whether users understand the limits of what these systems can actually do. Research offers some insight into what vendors are weighing: anthropomorphism may increase initial trust in an AI agent, but it can also lead to overtrust, as users assume judgment, discretion, and memory that the agent does not have.
When a bot sounds like your colleague
There are also differences between customer and enterprise contexts. While employees may be more willing to ask “Sarah” than “HR Bot 2.0” for help, anthropomorphizing these agents can also lead to confusion and over-reliance.
Even if AI agent personas offer some benefits, they can also introduce operational risks. For example, a person's name can connote authority, accountability, and emotional intelligence, which can create friction if users make incorrect inferences about an agent's authority or abilities. Friction can also arise when the escalation path from agent to human is unclear.
Transparency is also important when using anthropomorphic AI agents in the workplace. Organizations must decide how to inform users about AI agents and ensure that employees understand how to appropriately engage with them, including what assumptions to make regarding confidentiality, authority, and discretion. These considerations are also essential in customer-facing scenarios where different levels of understanding and agency must be assumed.
Humanize AI without crossing the line
As options for humanizing AI agents become more sophisticated and adoption of these agents expands, best practices and alternative approaches are emerging. For some, the question is not whether to humanize AI agents, but how to do so responsibly. For Aiphoria’s Chernilevsky, that journey starts with transparency and rigorous testing.
“Guiding principles should be transparency, fairness and clarity. Users need to be aware that they are talking to an AI, and they need to feel equally understood, no matter where they are or how they speak,” Chernilevsky said.
If vendors choose to humanize their agents, they need to be clear about the limitations, Timchenko says. In her experience, positioning AI agents as supportive rather than authoritative, and clearly communicating what they can and cannot do, helps prevent disappointment and loss of trust.
“Vendors must provide transparent communication regarding the capabilities and limitations of their AI agents,” she said.
Humanizing features that increase engagement with AI agents can also raise unrealistic expectations for users, Santillo says. In a Fractl study, 40% of users said naming a chatbot made them feel like a friend, and nearly a third reported having extended conversations that went beyond task completion.
“Humanization improves trust and engagement in the short term, but without clear boundaries and transparency, the lines can blur and users can make false assumptions about what AI can understand and do,” Santillo warned.
Some vendors are starting to ask tough questions about humanized AI agents, and middle-ground best practices are emerging. Some companies use role-based naming (for example, “Your AI HR Agent”) or explicitly label the technology as AI. These agents often have well-defined scopes of work, limited personalities and clear boundaries, with less emphasis on a specific individual with human qualities. These practices make it easier to hand agent work off to humans when necessary.
This is not just a UX issue, but also a procurement and change management issue. Choosing a name for an AI agent is not merely aesthetic or superficial; the name given to an agent influences how the underlying technology is used, understood, and trusted. Secondary influences such as role expectations, biases, and peer dynamics should also be considered. Assigning a gendered or culturally coded identity can shape how users perceive an agent's abilities and authority, even when those abilities are identical.
These considerations make cultural alignment and bias testing for agents essential, especially in customer-facing deployments. Ultimately, as more organizations adopt AI agents and the range of use cases expands, companies will need to develop clearer frameworks for determining how human-like these agents should be.
