Pros and cons of ChatGPT Health, from an AI medical expert: For journalists



CHICAGO — OpenAI this week introduced ChatGPT Health, “ChatGPT's exclusive experience designed for health and wellness,” in response to the more than 40 million people who ask ChatGPT health-related questions every day, the company said.

Northwestern University AI in clinical medicine expert Dr. David Leibovitz is available to speak with the media about the pros and cons of the new platform. These include not only that it is a “huge step forward” from patients sifting through Google search results, but also that, unlike conversations with doctors and therapists, “patients need to understand that health data shared on ChatGPT is not protected by HIPAA.” He can also discuss what true democratization of medical AI looks like and how Northwestern research is working to make these advances practical for patients.

To schedule an interview, contact Kristin Samuelson at ksamuelson@northwestern.edu.

Leibovitz is co-director of the Artificial Intelligence Medical Research Institute in the Center for Medical Education in Data Science and Digital Health at Northwestern University Feinberg School of Medicine. He has taught clinical informatics for decades, incorporating new methods for teaching and applying AI in clinical patient care. Leibovitz has served as chief medical information officer at two organizations that aggressively implemented AI in clinical medicine.

On the opportunity:

Leibovitz: “Currently, the 21st Century Cures Act requires health systems to provide patients with complete access to their medical records through standardized application programming interfaces (APIs), and electronic health record vendors such as Epic are required to provide that API. AI tools like ChatGPT Health help patients make sense of that data. At essentially no additional cost, patients can receive help understanding test results, preparing questions for appointments, and identifying gaps in care that may have been missed.”
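The record access Leibovitz describes typically happens over FHIR-based patient-access APIs, which return records as JSON "Bundle" resources. As a rough illustration of what a patient-facing tool does with that data, the sketch below extracts lab results from a hand-made bundle fragment (the values and reference ranges are invented; real bundles from an EHR API are far richer):

```python
import json

# A hand-made fragment in the shape of a FHIR R4 Bundle of lab Observations.
# Real bundles come from an EHR's patient-access API, not from a string literal.
SAMPLE_BUNDLE = """
{
  "resourceType": "Bundle",
  "entry": [
    {"resource": {"resourceType": "Observation",
                  "code": {"text": "Hemoglobin A1c"},
                  "valueQuantity": {"value": 6.9, "unit": "%"},
                  "referenceRange": [{"high": {"value": 5.7, "unit": "%"}}]}},
    {"resource": {"resourceType": "Observation",
                  "code": {"text": "LDL cholesterol"},
                  "valueQuantity": {"value": 96, "unit": "mg/dL"},
                  "referenceRange": [{"high": {"value": 100, "unit": "mg/dL"}}]}}
  ]
}
"""

def summarize_labs(bundle_json: str) -> list[dict]:
    """Pull each Observation's name, value, unit, and an out-of-range flag."""
    bundle = json.loads(bundle_json)
    results = []
    for entry in bundle.get("entry", []):
        res = entry["resource"]
        if res.get("resourceType") != "Observation":
            continue
        value = res["valueQuantity"]["value"]
        high = res["referenceRange"][0]["high"]["value"]
        results.append({
            "test": res["code"]["text"],
            "value": value,
            "unit": res["valueQuantity"]["unit"],
            "flag": "HIGH" if value > high else "normal",
        })
    return results

for row in summarize_labs(SAMPLE_BUNDLE):
    print(f"{row['test']}: {row['value']} {row['unit']} ({row['flag']})")
```

A summary like this, fed to an AI assistant alongside the patient's questions, is what turns raw API output into the "help understanding test results" described above.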

An important step forward:

“It has been more than 25 years since the Institute of Medicine's report ‘To Err Is Human: Building a Safer Health System’ documented tens of thousands of preventable deaths due to misdiagnoses and gaps in care, and we still haven't solved the problem. AI assistants that can review a patient's entire medical history and flag potential concerns represent a big step forward from what patients see on Google Search. Rather than triggering alerts based on individual symptoms, these tools synthesize information in context.”

Regarding concerns:

“Patients need to understand that health data shared on ChatGPT is not protected by HIPAA. Unlike conversations with a doctor or therapist, there is no legal privilege. This data can be subpoenaed in a lawsuit or accessed through other legal proceedings. For sensitive health issues, especially reproductive and mental health issues, that is a real consideration.”

The big picture:

“The question is not whether patients will use AI for health information. More than 40 million people already ask ChatGPT health questions every day. The question is whether we can help patients ask questions more effectively and safely, with the right guardrails and realistic expectations about what these tools can and cannot do.”

For local/on-device models:

“There is another approach that avoids privacy concerns entirely: running the AI model locally on the patient's own device. Modern smartphones have enough processing power to run capable language models without any data leaving the phone. No cloud storage, no corporate servers, and no risk of subpoena.”

About the technological trajectory:

“On-device AI capabilities, which run AI directly on local hardware such as mobile phones and wearables rather than sending data to the cloud, are rapidly advancing. Apple's approach with Apple Intelligence validates that sophisticated AI can run locally, and open-source models optimized for mobile hardware are improving month by month. In a year or two, patients will be able to analyze their downloaded medical records in complete privacy.”

From a democratization perspective:

“This is what true democratization of medical AI looks like: Patients can download their records using APIs that health systems are required to provide, run the records through an AI model on their phone, and gain personalized insights without their data ever touching a third-party server. There are no subscription fees, no privacy tradeoffs, and no dependencies on corporate policies or terms of service.”
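The fully local pipeline Leibovitz describes has three steps: read the exported record, format it into a prompt, and run it through a model stored on the device. The sketch below is a minimal illustration of that flow, not his group's implementation; the record fields are invented, and the `llama-cpp-python` call and `model.gguf` path are assumptions standing in for whatever on-device runtime a real app would use:

```python
import json
import os

# Invented example of a patient-exported record; a real export would be FHIR JSON
# downloaded through the health system's required patient-access API.
RECORD = {
    "conditions": ["type 2 diabetes"],
    "medications": ["metformin 500 mg"],
    "recent_labs": {"Hemoglobin A1c": "6.9 % (high)"},
}

def build_prompt(record: dict) -> str:
    """Format the locally stored record into a question for an on-device model."""
    return (
        "You are reviewing a patient's own exported medical record.\n"
        f"Record: {json.dumps(record, indent=2)}\n"
        "List any gaps in care the patient should ask their doctor about."
    )

prompt = build_prompt(RECORD)

# A quantized open-source model file stored on the device (hypothetical path).
MODEL_PATH = "model.gguf"
if os.path.exists(MODEL_PATH):
    # Inference stays on-device: no network call, no cloud storage.
    from llama_cpp import Llama  # pip install llama-cpp-python
    llm = Llama(model_path=MODEL_PATH, verbose=False)
    out = llm(prompt, max_tokens=256)
    print(out["choices"][0]["text"])
else:
    # Without a model file present, just show what would be sent to it.
    print(prompt)
```

Because both the record and the model live on the phone, nothing in this flow touches a third-party server, which is the privacy property the quote above is pointing at.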

About what Northwestern University is researching:

“Our research group is actively exploring ways to make this practical for the general public. The technical elements are in place, including access to standardized medical records, powerful mobile hardware, and increasingly capable open source models. The goal is to give everyone access to a meaningful second opinion on their health data, while keeping their data completely under their control.”


