How Doctors Use ChatGPT and Other AI to Avoid Burnout



Adam Rodman turns to ChatGPT when he's stumped.

The internal medicine doctor and Harvard Medical School professor said he is upfront about it with patients and avoids entering private medical information.

“We did it once where the patient themselves was typing and providing additional information to the chatbot,” Rodman said. “It was a three-way conversation between the two of us and ChatGPT.”

Rodman is one of more than a dozen medical professionals who spoke to Business Insider about which AI tools they use and for which tasks. Many use general-purpose models like ChatGPT, even though those models aren't specialized for medicine.

Others turn to medical technology startups. According to Rock Health data, AI-enabled health startups raised 62% of all digital health venture funding in the first half of 2025. In that period, $6.4 billion in venture funding poured into digital health companies, compared with $6 billion in the first half of 2024.

Generative AI is still young, and many of these tools haven't yet faced significant field testing, leading many hospitals to vet and pilot them extensively in-house. These tests probe medical ethics, and also whether the tools are useful in the first place.

Many health professionals have chosen not to adopt the technology at all. According to an Elsevier Health survey, 48% of doctors report using AI tools in their jobs. That number is growing quickly, though, up from 26% a year earlier.


David Zhang, a pathologist at SOMOS Community Care.

Corrie Aune for BI



Almost all the doctors we spoke to had one thing in common: they use AI tools to automate one of the most repetitive parts of the job, note-taking. Beyond that, their views on where AI is useful varied dramatically.

Many doctors use ChatGPT for everyday tasks, like writing emails.

The AI giants have positioned themselves in the medical market. At a recent Federal Reserve conference, OpenAI CEO Sam Altman said ChatGPT is “a better diagnostician than most doctors in the world.” Google and Microsoft have specialized medical models; Microsoft's AI CEO said its model was four times more accurate than human diagnosticians at solving case studies.

Most doctors who spoke to Business Insider use generic chatbots in their practice. Some use enterprise-level models that don't train on their data, while others said they avoid feeding sensitive information into the systems.

John Brownstein said that Boston Children's Hospital, where he is the chief innovation officer, has launched a HIPAA-compliant version of ChatGPT to safeguard protected health information. Almost 30% of employees now use it. He listed several examples of uses: generating reference letters, analyzing quantitative data, or synthesizing internal documents.

Brownstein has also worked with AI startups, but his bar for these tools keeps rising.

In the age of AI coding tools, a small team of engineers can accomplish what once required many more, so hospitals can build their own versions of these tools in-house, Brownstein said. Some medical technology startups, he added, are simply using AI as a “hook.”

How often do these AI startups pitch him? “It's like dozens of times a week,” Brownstein said. “No mercy.”

During our call, David Zhang, a pathologist at SOMOS Community Care, showed ChatGPT's response to a prostate biopsy. The chatbot said the biopsy image was consistent with ductal adenocarcinoma, a type of pancreatic cancer. The diagnosis was “not accurate,” he said.

“That's why we're doing our safe work,” Zhang said. “It can make the basic model even better.”


Zhang showed ChatGPT a prostate biopsy. The chatbot's diagnosis was “not accurate,” he said.

Corrie Aune for BI



Doctors looking for more than the average ChatGPT response may seek out more specialized tools. There are hundreds, if not thousands, to choose from; the FDA's list of AI-enabled medical devices currently has 1,247 entries.

“It was supposed to be that AI would come in and help streamline things,” said dentist Divian Patel. “What happened was a thousand portals, a thousand logins, and no one wants to use it.”

Patel is betting that his fellow dentists will pay to simplify. He and another dentist, Shervin Molayem, cofounded Trust AI, which combines many of these AI services on a single platform. The company raised $6 million in seed funding, per PitchBook.

Rebecca Mishuris, the chief medical information officer at Mass General Brigham, said she tries to avoid “shiny object syndrome” when evaluating the generative AI tools that cross her desk.

“They are offering solutions to problems I don't have,” Mishuris said. “Maybe I'll have them someday, but I don't have them today.”


Zhang said doctors were doing “safe work” because they could improve the base model.

Corrie Aune for BI



Alan Weiss, the SVP of clinical advancement at the hospital system Banner Health, said the number of vendors pitching him new AI tools is “almost overwhelming.” He has two independent groups review each tool, one for ethical concerns and one for clinical concerns.

Like many other industries, hospitals are trying to use AI for administrative tasks. Doctors who spoke with Business Insider reported using AI to check in patients and process health insurance claims.

These tools don't always live up to their hype. Last year, Rodman said, the industry was buzzing about AI-written messages to patients. Studies have since come out showing that some doctors spent more time on them than they would have spent writing the messages manually.

“All of them made me feel pretty bad,” Rodman said. “It's a field where there was a lot of excitement that didn't live up to its promises.”

The doctors Business Insider interviewed ranged from enthusiastic AI adopters to skeptics. Most, however, agreed on the power of one tool: ambient listening devices.

AI-powered ambient listening tools record conversations between doctors and patients and summarize them into a single note. Mishuris said the devices have “real transformative power.”


Carl Dirks said ambient listening limits the “cognitive drain” of note-taking.

Chase Castor for BI



Carl Dirks, an internist at Saint Luke's in Kansas City, called ambient listening a solution to “clinician burnout.” Some primary care doctors told him the tool had extended their careers by limiting the “cognitive drain” of rapid note-taking.

“We are really trying to restore human-human connection,” said Philip Payne, Dirks' colleague and the chief health AI officer at BJC Healthcare. “How do we keep the computer out of the way so that the provider and patient can have a conversation, rather than the provider sitting behind a keyboard, typing the whole time?”

There are no official rules or guidelines on disclosing the use of ambient listening devices. However, in states with two-party consent recording laws, doctors are legally obligated to disclose that the device is in use. Some hospitals have disclosure policies for all providers.

The patient must then decide whether they're comfortable having the conversation recorded and processed by AI. The quality of the disclosure matters: in a July study of 121 ambient-documentation pilots, nearly 75% of patients surveyed were satisfied with the use of the technology.

As a psychiatrist, Farhan Hussain writes rather extensive notes. He loved using an ambient listening device at his previous job, but he no longer has access to one in his new role at a telehealth company. He misses it.

“Without it, we're really just taking notes all the time,” Hussain said. “Damn it, I didn't go to medical school to become a scribe.”

Ambient AI is also flush with venture capital. In July, Ambience Healthcare raised a $243 million Series C backed by Oak HC/FT and Andreessen Horowitz. Nabla raised a $70 million Series C in June, led by HV Capital.


Dirks' colleague Philip Payne said ambient listening helps “restore human-human connection.”

Chase Castor for BI



The number of well-funded ambient companies gives physicians plenty of options. Brownstein uses Abridge, Hussain uses Nabla, and Weiss is currently piloting four different ambient products.

Francisco Lopez-Jimenez considers cardiology perhaps the most technical field of medicine, pointing to fellow cardiologists with backgrounds in physics and computer science.

“Cardiology has really been at the forefront of innovation,” said Lopez-Jimenez, a cardiologist at the Mayo Clinic. “It is one of the areas of medicine that began using and developing AI early, without a doubt.”

Lopez-Jimenez readily admits his bias; after all, he is a cardiologist himself. Pierre Elias, the medical director of artificial intelligence at NewYork-Presbyterian Hospital and another cardiologist, agreed.

“If you look at the FDA clearances of AI technology, it's radiology and cardiology that led the way,” Elias said. “But radiologists don't see patients the way cardiologists do.”

Across the broad swath of medicine, a gulf appears between the eager and the skeptical. According to Elsevier Health, 48% of clinicians surveyed said they use AI in practice, while 24% don't use the technology at all, even outside of work.

On the other side of that gulf, some doctors reject AI outright, perhaps worried about their skills atrophying. An August study found that among doctors who regularly used AI to assist with colonoscopies, the adenoma detection rate in procedures without AI fell from 28.4% before they adopted the technology to 22.4% afterward.

Jonathan Simon, an internal medicine physician at Bayhealth, said there is only “one area” where he uses AI: note-taking. He has attended several AI research consultations that intrigued him, but he hasn't touched popular models like ChatGPT or emerging medtech players in his practice.

Simon worried about the push for efficiency. Faster diagnoses, he noted, mean seeing patients at a faster rate, which benefits the business of medicine more than the patients.

“In the desire to push through as many patients as possible, because that's where the money is, the industry really has to focus on responsible and consistent use of AI,” he said.

“While mistakes may be rare, rare mistakes can destroy someone's life.”




