Image: Giving AI agents a personality and the ability to interrupt makes them more effective in discussions. Credit: Yuichi Se
In typical online meetings, people don’t always wait politely for their turn to speak. They interrupt to express strong agreement, remain silent when unsure, and let their personalities dictate the flow of the discussion. However, when artificial intelligence (AI) agents are programmed to discuss and collaborate, they are typically forced into a strict round-robin structure that suppresses this natural dynamic.
Researchers at the University of Electro-Communications and the National Institute of Advanced Industrial Science and Technology (AIST) have demonstrated that allowing AI agents to break these rules can actually make them smarter.
Their new work proposes a debate framework in which LLM-based agents are freed from a fixed speaking order. Instead, these agents can dynamically decide whether to speak up, interrupt someone, or remain silent based on their assigned personality traits and the exigencies of the moment. The researchers found that this human-like flexibility led to increased accuracy on complex tasks compared to standard models.
“Current multi-agent systems often feel artificial because they lack the messy real-time dynamics of human conversation,” the researchers explain. “We wanted to see whether giving agents social cues that we take for granted, such as the ability to interrupt or the choice to remain silent, would improve their collective intelligence.”
To test this, the team integrated the Big Five personality traits (such as openness and agreeableness) into each agent. Unlike traditional systems, in which an agent generates its complete response before the next agent begins, the new framework processes the conversation sentence by sentence. This granular approach lets agents “listen” to the ongoing discussion and compute an “urgency score” in real time.
If an agent’s urgency score spikes, it can immediately interrupt the current speaker, perhaps because it has spotted an error or has an important insight to share. Conversely, if the agent has nothing of value to add, it can choose silence rather than cluttering the discussion with redundant information.
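To make the idea concrete, here is a minimal Python sketch of that loop under stated assumptions: the Agent class, the trait encoding, the urgency heuristic, and the interrupt/silence thresholds are all illustrative placeholders, not the authors' implementation; in the actual framework the urgency judgment would come from the LLM agents themselves.

```python
# Illustrative sketch only: names, thresholds, and the scoring heuristic are
# assumptions for explanation, not the published system.
import random
from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    # Big Five traits on a 0..1 scale (hypothetical encoding).
    traits: dict = field(default_factory=lambda: {
        "openness": 0.5, "conscientiousness": 0.5, "extraversion": 0.5,
        "agreeableness": 0.5, "neuroticism": 0.5,
    })

    def urgency(self, sentence: str) -> float:
        """Score how urgently this agent wants to react to the latest sentence.

        A real system would query the underlying LLM for this; here a placeholder
        heuristic makes extraverted, less agreeable agents speak up more often,
        and spotting an obvious problem word raises urgency sharply.
        """
        base = 0.6 * self.traits["extraversion"] + 0.4 * (1 - self.traits["agreeableness"])
        spotted_issue = 0.4 if "error" in sentence.lower() else 0.0
        return min(1.0, base * random.uniform(0.5, 1.0) + spotted_issue)


def run_turn(speaker_sentences, listeners, interrupt_threshold=0.75, silence_threshold=0.35):
    """Stream a speaker's reply sentence by sentence; listeners may cut in or stay quiet."""
    for sentence in speaker_sentences:
        print(f"speaker: {sentence}")
        for agent in listeners:
            score = agent.urgency(sentence)
            if score >= interrupt_threshold:
                # High urgency: take the floor immediately instead of waiting for a turn.
                print(f"  [{agent.name} interrupts, urgency={score:.2f}]")
                return agent
            if score < silence_threshold:
                # Nothing useful to add: stay silent rather than padding the discussion.
                continue
    return None  # speaker finishes uninterrupted


if __name__ == "__main__":
    listeners = [
        Agent("assertive", {"openness": 0.7, "conscientiousness": 0.6,
                            "extraversion": 0.9, "agreeableness": 0.3, "neuroticism": 0.4}),
        Agent("reflective", {"openness": 0.8, "conscientiousness": 0.8,
                             "extraversion": 0.2, "agreeableness": 0.8, "neuroticism": 0.3}),
    ]
    reply = ["I think the answer is B.", "There may be an error in the premise, though."]
    run_turn(reply, listeners)
```

The key design point the sketch tries to show is that the speak/interrupt/stay-silent decision is re-evaluated after every sentence, so the conversation's flow emerges from the agents' traits rather than from a fixed speaking order.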
The framework was evaluated using the MMLU (Massive Multitask Language Understanding) benchmark. The results were clear: the “chaotic” agents outperformed the single-LLM baseline in task accuracy.
Interestingly, including personality traits significantly reduced unproductive silence. Because the agents acted according to their distinct characteristics, some more dominant and others more reflective, the group reached consensus more efficiently than a typical group of rule-bound bots.
This research suggests that the future of AI collaboration lies not in tighter controls, but in mimicking human social dynamics. By helping agents navigate the friction of interruptions and the nuances of silence, developers can create systems that solve problems not only more naturally, but also more effectively.
The team plans to further apply this framework to creative and collaborative work, developing richer metrics to understand how “digital personalities” influence group decision-making.
Authors
Meiwa Kimura (University of Electro-Communications)
Ken Fukuda (National Institute of Advanced Industrial Science and Technology, University of Electro-Communications)
Yasuyuki Tahara (University of Electro-Communications)
Yuichi Se (University of Electro-Communications)
Research method
Data/statistical analysis
Research theme
not applicable
Conflict of interest statement
The authors declare no competing interests
Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.
