After OpenAI and Anthropic launched dedicated healthcare efforts in January, a study published in February found that OpenAI’s ChatGPT Health had a 50% error rate, incorrectly recommending delayed care in half of emergency test cases.
This failure was caught only after the tool was deployed, and it is a symptom of a broader problem: health systems and insurance companies are adopting AI so rapidly that they often bypass the critical testing needed to determine how well these systems work and how safe they are for patients. This rush to expand AI in healthcare is deepening an existing crisis of trust.
Confidence in healthcare in the United States was already declining, and the institutional response to the COVID-19 pandemic accelerated that decline. A national survey of more than 443,000 U.S. adults found that trust in doctors and hospitals fell by more than 30 points from 2020 to 2024, from 72% to 40%, with declines across multiple sociodemographic groups. For Black, Latinx, and Indigenous communities, this breakdown compounds longstanding medical distrust rooted in the legacy and ongoing history of medical racism in the U.S. healthcare system. Research has shown that patients who mistrust their healthcare providers are more likely to delay care, such as preventive screenings, or to stop taking medications, and that these patterns are associated with higher rates of hospitalization and premature mortality.
Documented harms caused by AI deepen this distrust. For example, a widely cited algorithm affecting an estimated 200 million Americans systematically underestimated how sick Black patients were because it used medical costs as a proxy for illness; patients were never told the tool was being used to determine their level of care. Medicare Advantage insurers have used AI tools that doubled denial rates for elderly patients. About 75% of those denials were overturned on appeal, yet fewer than 1% of patients appealed. The federal government has since begun piloting AI-powered prior authorization for traditional Medicare in six states.
Healthcare, which accounted for 18% of U.S. GDP, or $5.3 trillion, in 2024, is a prime target for the AI industry. In 2025, U.S. healthcare organizations spent $1.4 billion on AI tools for functions ranging from medical image analysis to billing and documentation automation, nearly triple the previous year’s spending. Beyond the potential profits, the field offers what AI companies need to build and improve their systems: data, and vast amounts of it. That includes electronic medical records, insurance claims, diagnostic images, genetic profiles, and more for hundreds of millions of Americans, often collected without meaningful transparency or input from patients or communities about how the data is used.
Survey data show that the rapid adoption of AI in healthcare is compounding the distrust Americans already have in the healthcare system. A February 2025 survey of more than 2,000 Americans found that 66% had low confidence in their health system to use AI responsibly, and 58% had low confidence that it would make sure an AI tool would not harm them.
Neither AI knowledge nor health literacy changed these results. The strongest predictor was how much someone already trusted the healthcare system.
Although a nationally representative survey found that most patients want to know when AI is used in their diagnosis or treatment, no federal law requires disclosure, and only a handful of states currently have laws addressing it. When patients are not told what is happening to them or to their data, and no one is obligated to tell them, it affects all patients, but especially the communities with the least trust left to lose.
Patients who experience discrimination in healthcare are significantly less likely to trust the healthcare system to use AI responsibly. Deploying AI systems without meaningfully involving patients and communities in decision-making will only repeat the patterns that led to mistrust in the first place.
What needs to change is who shapes decisions about how AI tools are purchased, governed, and used. Patients and community members need formal decision-making roles, not just advisory seats. Before deploying an AI tool, health systems and payers must publicly report its performance, including how it performs across racial and ethnic groups. And when AI is used in a patient’s diagnosis or treatment, that must be communicated clearly and upfront. These are the baseline conditions for a trustworthy system.
Health systems and companies face a choice in how they earn the trust of their patients and the communities they serve. They can move fast, or they can do the harder work of moving at the speed of trust. That means giving patients and community members a say before these systems are purchased, not after harm has occurred.
Dr. Oni Blackstock, MHS, is a physician-researcher, founder and executive director of Health Justice, and a Public Voices Fellow on Public Interest Technology for the OpEd Project.
