Patients are consulting AI. Physicians should do the same. AI needs to be incorporated into physician training.

Ask most doctors today and they will describe some version of this scene: during a consultation, the patient says, “I asked ChatGPT about the treatment you recommend.”

A few years ago, that admission might have infuriated doctors. Today, it is simply the new reality. Yet consulting AI is exactly what tens of thousands of medical students and residents applying to programs this fall are prohibited from doing.

As an academic physician and medical school professor, I have watched schools and health systems across the country grapple with an uncomfortable truth: medicine is training doctors for a world that no longer exists. Some institutions are forward-thinking. Dartmouth's Geisel School of Medicine is incorporating artificial intelligence literacy into its clinical training. Harvard Medical School offers an artificial intelligence in medicine Ph.D. track. But we all have to act faster.

The numbers illustrate the problem. Hundreds of medical studies are published every day in oncology alone, and the volume across all disciplines has become impossible for any individual to absorb. Within 10 years, clinicians who treat patients without consulting validated, clinically appropriate AI tools will find it increasingly difficult to defend their decisions in medical malpractice lawsuits. The gap between what one person knows and what medicine collectively knows has become too large for any one person to bridge.

Our patients are not waiting. They are already consulting ChatGPT or another AI chatbot before they arrive for their appointments. They ask questions that assume the doctor has weighed options the doctor may never have considered. A colleague in Boston recently told me about a patient whose chatbot surfaced three treatment options the doctor had not initially considered. They spent 20 minutes weighing the alternatives together. When the doctor explained his recommendation, he noticed that her hands were shaking. The AI had given her information, but he gave her reassurance. The AI outlined the probabilities. He held space for her fear.

A few months later, she went into remission. AI helped her advocate for herself. But only a doctor could answer what she really wanted to know: “Will I be okay?”

This is the future of medicine: AI as a consultant, not a replacement. But some medical schools and health systems seem determined to prepare students for the past. Some schools restrict the use of AI in coursework and clinical writing. The Association of American Medical Colleges has restricted the use of AI in residency applications. Students and trainees complain that they are expected to become proficient with AI even as they are told they cannot use it. The instinct to suppress new technology is understandable. It is also outdated. What students need instead are useful institutional requirements.

First, an AI verification protocol. Medical schools already hold morbidity and mortality conferences, where doctors review cases that went wrong. We need AI rounds, where students present: Which model did you consult? What did it recommend? Where did you override it, and why? This should become a standard part of clinical training, documented in the medical record and reviewed by the attending physician.

Second, transparency standards. The Accreditation Council for Graduate Medical Education (ACGME) should require residents to document AI consultations the same way they document specialist consultations. What questions were asked? What were the answers? What clinical judgment shaped the final decision? This creates an auditable trail and teaches habits that will define careers.

Third, competency assessment. Medical licensing boards should test AI literacy the same way they test pharmacology. Which models have been validated for which clinical problems? What are the known error rates? When can an algorithm be trusted, and when should it be questioned? These are not theoretical questions. They will underlie every treatment decision trainees make.

Finally, a patient consent framework. Patients have a right to know when AI informs clinical decisions: not because the technology is inherently experimental, but because transparency is part of the partnership, and many deployments are still being evaluated for safety, privacy, and effectiveness. Students need practice with conversations like this: “I consulted a clinical decision support tool that analyzes thousands of similar cases. Here is what it suggests, and here is why I agree or disagree.”

This matters most where American health care is failing. Dartmouth Health serves rural areas of New Hampshire, Vermont, and Maine, where shortages of geriatric, palliative care, and mental health professionals are acute. This fall, Geisel launched an AI curriculum that begins the moment students arrive, because the school recognized an important truth: if it does not teach students how to think about and use these tools, technology companies will end up driving both the curriculum and clinical practice. By training the first generation of clinicians to master the technology rather than fear it, we are uniquely positioned to show how AI can fill impossible gaps in underserved areas.

Through my work and research in end-of-life care, I have held the hands of dying patients, embraced families in their final moments, and sat silently when silence was the only honest response. No algorithm can do that work. But AI can make us smarter and more efficient. It will not make the stethoscope obsolete, nor the clasped hand. Those are irreplaceable.

Thousands of students and residents are currently interviewing at medical schools and residency programs across the country. To those learners: ask about AI training. Ask how your education will prepare you for patients who arrive with AI-generated questions. Ask about clinical decision support tools. And think about how you will become the doctor AI cannot replace: the one who holds a patient's hand, interprets their fears, and answers the question beneath the question.

To my colleagues in academic medicine: the ACGME should mandate AI competency standards by 2026. Medical licensing boards must add AI literacy to board exams within two years. Schools should replace bans and restrictions on AI with AI protocols. Our students should be trained in the medicine we will practice, not the medicine we remember.

To our patients (that is, all of us): The next time you pull out your phone and mention ChatGPT, your doctor owes you a better answer than silence. Ask: Did you consult AI tools for my diagnosis? Were you trained to use them? If the answer is no, ask why not.

The choice is not between human doctors and artificial intelligence. It is between doctors who use every tool available, technological and human, to serve their patients, and doctors who face impossible challenges alone. We know the future our patients need. It is time for our medical schools and health care systems to catch up.

Angelo Volandes is a professor, clinician, and researcher at Dartmouth's Geisel School of Medicine and vice chair for research in the Department of Medicine at Dartmouth Health.


