It’s hard to get away from the topic of large language models, ChatGPT, and, more broadly, artificial intelligence in medicine. That reflects the number of pitches I receive, the news coverage, the chatter on social media, the conferences we attend (including MedCity’s own INVEST conference, which wrapped up earlier this week in Chicago), and even the submissions from healthcare content contributors. In short, it’s everywhere.
But the fear of AI is real. And I’m not talking about an Ex Machina doomsday scenario in which AI gains sentience and takes over the human world. A more rational fear concerns its authoritative tone and its ability to present even false information as if it were true, not to mention the use of algorithms to deny care. Think deepfakes.
There is a growing realization that standards must be developed in response to the tremendous power of this new technology (some believe it will prove as consequential as the Industrial Revolution). Not surprisingly, institutions and companies worldwide, including the White House, are working to develop guidelines for responsible AI. In this episode of the Pivot Podcast, we spoke with Suchi Saria, associate professor of medicine and director of the Machine Learning and Healthcare Lab at Johns Hopkins University. She is also the CEO of Bayesian Health. Saria has spent years researching responsible AI and how to develop a framework for implementing AI in healthcare.