How safe AI is in healthcare depends on the people in healthcare



Researchers at IIT-Madras and the Translational Health Science and Technology Institute in Faridabad are developing artificial intelligence (AI) models to predict the age of a growing fetus from ultrasound images. The model, called Garbhini-GA2, was trained on scans from around 3,500 pregnant women who visited a government hospital in Gurugram, Haryana. Each scan was labelled with the different parts of the fetus it showed, along with their sizes and weights. These are measurements that can be used to estimate the fetus's gestational age.

After training, the team tested the model with (unlabelled) scans from 1,500 pregnant women who had visited the same hospital and around 1,000 pregnant women who had visited Christian Medical College, Vellore. They found that Garbhini-GA2 misestimated the age of the fetus by less than half a day. This is a marked improvement over the most widely used method today, Hadlock's formula. Because the formula is based on data from a white population, according to the IIT-Madras team, it is known to misestimate the age of an Indian fetus by up to seven days.
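The train-then-test workflow described above can be illustrated with a deliberately simplified sketch. The single feature (femur length), the synthetic data, and the one-variable linear model below are all invented for illustration; Garbhini-GA2's actual inputs and architecture are more sophisticated and are not detailed here:

```python
# Sketch of "train on labelled scans, then evaluate on held-out scans".
# All numbers below are synthetic; they are not Garbhini-GA2's data.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b with one feature."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

def mean_abs_error(model, xs, ys):
    """Average absolute gap between predicted and true gestational age."""
    a, b = model
    return sum(abs((a * x + b) - y) for x, y in zip(xs, ys)) / len(xs)

# "Training" data: femur length (mm) vs. gestational age (weeks).
train_x = [20, 30, 40, 50, 60, 70]
train_y = [15, 19, 23, 27, 31, 35]
model = fit_linear(train_x, train_y)

# Held-out "test" scans, mirroring the external-validation step.
test_x = [25, 45, 65]
test_y = [17, 25, 33]
print(mean_abs_error(model, test_x, test_y))
```

The key design point mirrored here is that the model is judged on scans it never saw during training, including scans from a different hospital, which is what makes the reported half-day error meaningful.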

The team now plans to test the model on datasets from across India.

This is only a glimpse of how AI tools are quietly reshaping Indian healthcare. From fetal ultrasound dating and guidance for high-risk pregnancies to virtual autopsies and clinical chatbots, these tools are matching the accuracy of experts while speeding up workflows. However, their promise is entangled with systemic challenges: biased data and automation bias, privacy concerns, and weak regulation, often exacerbated by the sensitivity of the health sector itself.

It's helpful, but it could improve

Almost half of all pregnancies in Indian women are high-risk pregnancies (HRPs), according to a study published in the Journal of Global Health. In an HRP, the mother and/or the newborn are more likely to fall ill or die. Conditions that lead to these outcomes include severe anaemia, hypertension, pre-eclampsia, and hypothyroidism. The risk is higher for women without formal education, rural women, and women from marginalised social groups.

Experts say routine monitoring is the best way to reduce maternal and perinatal mortality in HRPs. In rural areas, this task often falls to women healthcare workers called auxiliary nurse-midwives (ANMs), the first point of contact between pregnant women and the healthcare system. ANMs are trained by medical professionals to recognise HRPs and advise women of their options.

The Mumbai-based NGO ARMMAN launched one such training programme in 2021, in collaboration with UNICEF and the governments of Telangana and Andhra Pradesh. According to Amrita Mahale, director of innovation at ARMMAN, it trains healthcare workers, including ANMs, in the "end-to-end management of HRPs".

The NGO trains ANMs to track and manage HRPs through "classroom training and digital learning", Mahale said, adding that it also runs a WhatsApp helpline for ANMs "to go through learning content and apply it to actual high-risk pregnancy cases" and "for doubt resolution and hand-holding".

When in doubt, ANMs are encouraged to send their queries to the trainers. However, "the trainers themselves are overworked and do not always prioritise responses to ANM queries," Mahale said. This is why ARMMAN adopted an AI chatbot earlier this year. It accepts both text- and speech-based queries from ANMs and responds in the same medium with clinically validated answers.

"Healthcare workers now act as humans in the loop, intervening when the chatbot cannot answer a question or when an ANM is not satisfied with the chatbot's response," Mahale said. The chatbot, currently being tested with 100 ANMs, has received "94% positive feedback" from its users, she said. "Domain experts rated 91% of its responses as accurate and satisfactory."

But she also flagged an issue: many speech-recognition models still "wrestle with the variations and accents of Indian languages, particularly regional" ones, she said. As a result, the chatbot may fail to understand about 5% of the queries shared as audio notes rather than as text.

A kinder cut

Amar Jyoti Patowary heads the forensic medicine department at the North Eastern Indira Gandhi Regional Institute of Health and Medical Sciences. He is one of India's few "virtual autopsy" experts.

Autopsies don't have a good reputation among the public. When Dr. Patowary and his team surveyed the relatives of 179 deceased people who had been autopsied in their department, about 63% expressed fear of the body being mutilated and of delays in carrying out funeral rituals. Similar concerns have been reported from rural Haryana.

In a virtual autopsy, or virtopsy, the body is scanned with CT and MRI machines to generate detailed images of its internal structures. A computer then builds a 3D image of the body. Doctors feed this image to convolutional neural networks (CNNs), models adept at extracting features from one set of images and using them to classify others.
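The core CNN idea, extracting a feature from an image and classifying on it, can be boiled down to a toy sketch: slide a small filter over the image, take the strongest response as a feature, and threshold it. The filter, the tiny "images", and the threshold below are all invented for illustration; real virtopsy models are deep networks trained on postmortem CT volumes:

```python
# Toy illustration of the convolution-then-classify idea behind CNNs.
# Not a real forensic model; the images and filter are invented.

def convolve2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation, as CNNs use)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

def classify(image, kernel, threshold):
    """Label the image 1 if the strongest filter response exceeds threshold."""
    fmap = convolve2d(image, kernel)
    peak = max(max(row) for row in fmap)
    return 1 if peak > threshold else 0

# A filter that responds to dark-to-bright vertical edges, and two
# tiny "scans": one containing a sharp edge, one uniform.
kernel = [[-1, 1], [-1, 1]]
edge_image = [[0, 0, 9, 9], [0, 0, 9, 9], [0, 0, 9, 9]]
flat_image = [[5, 5, 5, 5], [5, 5, 5, 5], [5, 5, 5, 5]]
print(classify(edge_image, kernel, threshold=4))  # edge detected
print(classify(flat_image, kernel, threshold=4))  # no edge
```

A trained CNN differs from this sketch mainly in scale: it learns thousands of such filters from labelled examples instead of using one hand-written filter and threshold.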

In 2023, researchers at Tohoku University in Japan built a CNN that could use chest CT scans to distinguish individuals who had drowned from those who had died of other causes. The model was 81% accurate "in cases where resuscitation was performed" and 92% accurate in cases where resuscitation was not attempted. In 2024, Swiss scientists developed a CNN that could tell whether a person had died of a cerebral haemorrhage based on postmortem CT images.

A traditional autopsy takes about 2.5 hours to complete whereas a virtopsy can be completed in about 30 minutes, Dr. Patowary said.

In a traditional autopsy, if the first examination is inconclusive, a second one may be required, and that is difficult once the body has been dissected. A virtopsy, however, allows as many examinations as required because the body can be "reconstructed" over and over from the scans.

What a virtopsy might miss, Dr. Patowary said, are "small soft-tissue injuries" that could indicate how a person died, as well as changes in the colour of tissues and organs and the smell of the body and its fluids. However, he expressed confidence that these gaps can be closed by combining a "verbal autopsy", in which the team checks with accompanying relatives or police officers, with a visual examination of the body and its cavities.

Access Control

These cases suggest the best use of AI may be as an assistant to healthcare professionals. In 2019, MediBuddy, a digital healthcare company that offers online doctor consultations and other services, experimented with an AI bot that could chat with patients, extract clinically relevant details from the conversation, and present them to doctors along with a suggested diagnosis. Nine of the 15 doctors who tested the app liked it while the rest remained "skeptical", said Krishna Chaitanya Chavati, head of data science at MediBuddy.

He flagged data privacy as a key concern. In India, digital personal data, including personal health information, is governed by the Information Technology Act 2000 and the Digital Personal Data Protection (DPDP) Act 2023. Neither law specifically mentions AI technologies, but lawyers suggest the latter can be applied to AI tools. Still, the DPDP Act "clearly lacks" provisions for AI-driven decision-making and accountability, lawyers wrote in a May 2025 review.

To allay these concerns, Chavati said, strong data-security protocols are needed. At MediBuddy, the team has deployed several. Two of them are a personally identifiable information (PII) masking engine and role-based access. The masking engine is a program that identifies and hides all personal information from a given algorithm, preventing unauthorised users from tracing the data back to a single individual. Role-based access ensures that individuals within the company can see only the data their role requires.
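A minimal sketch of these two safeguards is shown below. The field names, roles, and record are invented for illustration; MediBuddy's actual implementation is not public:

```python
# Sketch of a PII masking engine combined with role-based access.
# Fields, roles, and data below are hypothetical examples.

PII_FIELDS = {"name", "phone", "address"}

# Which record fields each role may see; everything else is withheld.
ROLE_VIEWS = {
    "doctor": {"name", "symptoms", "history"},
    "analyst": {"symptoms", "history"},  # analysts never see identifiers
}

def mask_record(record, role):
    """Return a copy of the record redacted for the given role."""
    allowed = ROLE_VIEWS.get(role, set())
    out = {}
    for field, value in record.items():
        if field in allowed:
            out[field] = value
        elif field in PII_FIELDS:
            out[field] = "***"           # masked: cannot be traced back
        else:
            out[field] = "[restricted]"  # hidden by role-based access
    return out

record = {"name": "A. Patient", "phone": "98xxxxxxxx",
          "symptoms": "fever", "history": "none"}
print(mask_record(record, "analyst"))
```

The point of layering the two controls is that even a role entitled to clinical fields (like the hypothetical analyst) receives the record with identifiers already masked, so no single internal user can link the data to one individual.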

In the loop

Shivangi Rai, a lawyer who helped draft the National Public Health Bill and other healthcare bills, said "automation bias" is another source of concern. Rai is currently the deputy coordinator of the Centre for Health Equity, Law and Policy in Pune.

Automation bias is "the tendency to overly trust and follow the suggestions made by an automated system, even if the suggestions are incorrect," Rai said. It sets in when the doctors and other experts who make up the "human in the loop" bank too much on an AI-powered app's decisions rather than on their own clinical judgment.

In 2023, German and Dutch researchers asked radiologists with a wide range of experience levels to assess mammograms (X-ray scans of the breast) and assign each a BI-RADS score. BI-RADS is a standardised metric radiologists use to report how likely the tissue observed in a mammogram is to be malignant.

The radiologists were also told that an AI model would analyse the mammograms and assign BI-RADS scores of its own. In truth, there was no such model: the researchers arbitrarily, and secretly, assigned scores to some of the mammograms. They found that when the purported "AI model" reported an incorrect score, the radiologists' own accuracy dropped significantly. Even those with more than 10 years of experience reported the correct BI-RADS score in only 45.5% of such cases.

"We were surprised that even experienced radiologists were adversely affected by the AI system's judgments," the study's lead author said in 2023.

For Rai, the study is evidence of the need to train doctors "on the limits of AI" and to constantly test and re-evaluate "AI tools developed and used in healthcare".

India's rapid adoption of medical AI has lit a path to cheaper, faster, and fairer care. But algorithms also inherit human errors and can obfuscate them further. If the technology is to expand healthcare rather than displace ethical medicine, medical AI will need robust data governance, clinician training, and enforceable accountability.

Sayantan Datta is a faculty member at Krea University and an independent science journalist.



