>> Artificial intelligence has established itself in all kinds of scientific fields.
Perhaps nowhere does it promise to save lives like in medicine.
These programs are learning to answer medical questions and diagnose diseases.
However, there are still issues to be resolved.
I talked to Dr. Isaac Kohane, Editor-in-Chief of NEJM AI and Chair of the Department of Biomedical Informatics at Harvard Medical School.
I asked him about the potential of AI in medicine.
>> Doctors can certainly use AI as an aid.
It can remind them of everything they need to know about their patients, and about patients like theirs.
At the same time, patients are well aware of the shortage of primary care physicians in the United States.
They have very little time with a doctor, and a doctor is not always available.
Giving patients another resource for medical advice where care is lacking, so they can decide whether or not to seek it, would probably be transformative as well.
When doctors see me, they have forgotten many details about me.
What if they could have a summary of what is medically important to know about me today?
And what about other patients like me: what is the correct treatment?
Is this patient receiving the correct treatment?
Are there screening tests that should be done at today's visit?
Medicine is becoming more and more complex.
What else do we know about preventive medicine?
In many ways, prevention is getting less and less attention.
Doctors are very busy.
These AI programs are not perfect.
They can make mistakes.
But at least it's a conversation you can have to help you decide whether you should see a doctor.
>> How far along are these systems?
>> At the moment they are very incomplete.
Nevertheless, they are already being used today.
The reason they are used is that there is a huge need.
We have all heard of the so-called Dr. Google.
People are already using search to get medical advice, and when ChatGPT was launched by OpenAI this winter, patients rushed to use it.
Doctors have also started using it.
About 30% of healthcare costs are administrative expenses.
Claims processing, prior authorization, and reimbursement decisions all involve judgments about how appropriate a treatment is, whether a patient should be admitted, and other matters of medical judgment.
>> Given the deficiencies and incompleteness you've described, what should a patient look for?
Are there red flags?
>> In some cases, it can be very subtle.
One clear sign is when the program claims to be citing authoritative sources.
Always check the source.
These programs are known to fabricate citations.
Google and Microsoft are working hard to eliminate these fabrications by having a second, independent program check the output.
I would never do anything risky without first checking with a medical professional.
Don't change your medication on your own.
If the program says a medication may not be suitable, that helps you have a conversation with your doctor.
I wouldn't act on it by myself.
In other words, applying human common sense turns out to be a very useful filter.
>> Going further, what are the promises of AI?
What do developers and others expect to be able to do in the future?
>> I don't think doctors are able to spend enough time with their patients.
They spend much of their time as bureaucrats.
By letting AI take over that bureaucratic role, doctors will be able to interact more with their patients.
And long-term hope here doesn't mean 20 years; it's more like 5 years.
These programs could review all the data, subject to appropriate privacy protections,
and actually come up with new biomedical insights:
potential new treatments,
the patient groups that could benefit from those treatments, and in fact an accelerated drug discovery process as well.
Because the same limitations we've been discussing for humans as doctors ultimately apply to humans as life science researchers too:
they can't know everything.
They cannot keep up with every discovery being made at once.
These programs are very good at knowing everything.
>> Is that the biggest potential pitfall, too much dependency?
>> That's one pitfall.
We want to keep relying on our own common sense.
I'm not convinced these programs have it, even if they sometimes appear to.
Even so, ultimately we have to stay true to our values.
And we believe that, at this time, we cannot rely on these programs to share those values.
The most important thing developers can do is tell us what data these models are trained on.
For example, I don't know whether the training data accurately represents the problems American patients have, or whether it came from patients in India.
We do not know what data these programs are trained on, which leaves great uncertainty about their quality and their applicability to different populations.
>> Dr. Isaac Kohane, thank you very much.
>> It was a pleasure talking with you.
