New framework facilitates collaboration in human-AI teams

As artificial intelligence (AI) becomes integrated into critical health, safety, financial, and governance decisions, the key question is no longer whether humans and AI will work together, but how to structure this collaboration to achieve true complementarity. In a new paper, “Towards a science of human-AI teaming for decision-making: A complementarity framework,” researchers introduce a framework for understanding and designing human-AI teams for decision-making. The framework draws on collective intelligence research and focuses on reasoning, memory, and attention as core processes that can be distributed across humans and AI systems. The paper, published in PNAS Nexus, provides guidance for researchers, practitioners, and policymakers looking to build human-AI teams that are effective, accountable, and aligned with human values.

The paper was co-authored by an interdisciplinary team from Carnegie Mellon University (CMU), the Massachusetts Institute of Technology, the University of Illinois at Urbana-Champaign, Microsoft Research, Harvard University, and the University of Tennessee at Knoxville.

“Organizations often frame the problem in terms of humans versus AI,” said Anita Williams Woolley, professor of organizational behavior at Carnegie Mellon University’s Tepper School of Business and co-author of the study. “The better question is how to design teams so that AI can extend what humans can notice, remember, and reason about, while humans provide context, judgment, and accountability.”

The framework articulates the socio-technical conditions that shape whether human-AI teams actually achieve complementarity, which occurs when a human-AI team outperforms either a human alone or an AI system alone. Conditions that support complementarity include team composition, trust alignment, shared mental models, training, and task structure.

The paper also outlines design principles for achieving complementarity, including defining goals and constraints, dividing roles, coordinating attention and questioning, building a knowledge infrastructure, and establishing ongoing training and assessment. The framework provides a common vocabulary for diagnosing where human-AI teams are likely to succeed, where they are likely to fail, and how to improve them.

The authors also discuss the theoretical, practical, and policy implications of their work, emphasizing alignment with human values, accountability, and equity.

“AI is becoming deeply integrated into collective decision-making, creating profound changes in the way decisions are made across domains, from health care and emergency response to finance, transportation, and governance,” explains Cleotilde Gonzalez, professor of cognitive decision science at CMU and lead author of the paper.

“Realizing this potential requires deliberate design, rigorous evaluation, and principled governance. Our insights provide a roadmap for building human-AI teams that are not only high-performing and adaptive, but also transparent, trustworthy, and fundamentally human-centric.”
