A first-of-its-kind study led by researchers at the Centre for Addiction and Mental Health (CAMH) has found that artificial intelligence (AI) models used to predict aggressive incidents in acute psychiatric care can reinforce and widen existing social and structural inequalities by overestimating the likelihood of aggression among already marginalized groups. The findings, recently published in npj Mental Health Research, highlight the importance of careful evaluation to ensure that AI tools do not perpetuate harm and instead promote more equitable care in clinical practice.
“While the fairness of clinical AI tools has been evaluated in other fields, this study highlights a critical gap in mental health care, given that the assessments used to train AI models are often based on subjective observations shaped by underlying social and structural biases,” said Dr. Marta Maslej, staff scientist at the Krembil Centre for Neuroinformatics (KCNI) and senior co-author of the study. “Without built-in fairness, clinical use of AI models can lead to significant distress, loss of trust, and even provoke aggressive incidents that would not otherwise have occurred. It is clear that we need to develop AI applications that center and promote fairness.”
Findings highlight the importance of equity analysis
Several health systems in the Netherlands, Switzerland, China, the United States, and Canada are evaluating or considering AI models that predict aggressive or violent behavior, with the aim of enabling earlier intervention and targeted de-escalation. However, little research has examined whether these tools perform equitably across patient populations, particularly in psychiatry, where social and structural factors strongly shape care experiences.
To address this gap, the research team trained a machine learning model (a type of AI) on the electronic medical records of more than 17,000 CAMH inpatients and examined how its prediction errors varied across intersecting social and demographic factors such as race, gender, and social background. The model showed clear bias, producing higher false-positive rates for Black and Middle Eastern individuals, men, patients brought to the emergency department by police, and people in unstable or supportive housing arrangements. These findings suggest that such a model could shape clinical decision-making in ways that disproportionately flag already over-surveilled and structurally disadvantaged groups as high risk, further exacerbating inequities.
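To illustrate the kind of equity analysis described here, the sketch below computes false-positive rates for intersecting subgroups of patients and compares each to the overall rate. It is a minimal illustration only; the column names, data layout, and grouping variables are assumptions for demonstration and do not reproduce the study's actual pipeline or findings.

```python
# Minimal sketch of a subgroup false-positive-rate audit.
# Column names ("race", "gender", "aggression_observed", "aggression_predicted")
# are illustrative assumptions, not taken from the study.
import pandas as pd

def false_positive_rate(df: pd.DataFrame) -> float:
    """FPR = predicted positives among records with no observed aggression."""
    negatives = df[df["aggression_observed"] == 0]
    if len(negatives) == 0:
        return float("nan")
    return (negatives["aggression_predicted"] == 1).mean()

def audit_by_group(df: pd.DataFrame, group_cols: list[str]) -> pd.DataFrame:
    """Compare each intersectional subgroup's FPR against the overall rate."""
    overall = false_positive_rate(df)
    rows = []
    for keys, subgroup in df.groupby(group_cols):
        keys = keys if isinstance(keys, tuple) else (keys,)
        fpr = false_positive_rate(subgroup)
        rows.append({
            **dict(zip(group_cols, keys)),
            "n": len(subgroup),
            "fpr": fpr,
            "fpr_gap_vs_overall": fpr - overall,
        })
    return pd.DataFrame(rows).sort_values("fpr_gap_vs_overall", ascending=False)

# Hypothetical usage: audit predictions across intersections of race and gender.
# predictions_df = pd.read_csv("model_predictions.csv")
# print(audit_by_group(predictions_df, ["race", "gender"]))
```

Subgroups whose false-positive rate sits well above the overall rate are the ones a model would flag as "high risk" more often than their observed outcomes warrant, which is the pattern of bias the study reports.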
Advancing unbiased AI in mental health care
The findings highlight that fairness is not a secondary consideration but a core requirement for safely deploying AI in mental health settings. This research is part of CAMH’s broader commitment to lead responsible, patient-centered implementation of AI in mental health care, grounded in ethics, transparency, and trust.
As part of this effort, the KCNI Predictive Care Lab, co-led by Drs. Laura Sikstrom and Marta Maslej, is conducting research to better understand and address the real-world impact of AI in mental health care. The lab leverages award-winning computational ethnographic approaches to identify and address potential harms while designing AI systems that promote equity and improve outcomes for individuals and communities. Building directly on the results of this study, the team recently secured funding from the Canadian Institutes of Health Research (CIHR) to co-design a next-generation AI tool, FARE+, which aims to identify drivers of biased predictions and inform strategies to mitigate them, supporting more equitable and clinically meaningful risk assessments.
“By moving away from binary risk prediction toward more patient-centered tools, we have the potential to use AI to redress historical and ongoing inequalities in our health care system,” said Dr. Laura Sikstrom. “By shifting from predicting individual risk to detecting collective bias, this research advances a new paradigm for AI in mental health care – one that prioritizes fairness, health equity, and the well-being of both patients and staff.”
The study was led by Yifan Wang, a former KCNI research trainee and current medical student at the University of Ottawa, in collaboration with a KCNI senior investigator, and was supported by an SSHRC Insight Development Grant and a Google Award for Inclusion Research.
Source:
Centre for Addiction and Mental Health
Journal reference:
DOI: 10.1038/s44184-026-00194-6
