Oncologists who use or are considering using AI tools tend to agree on three points regarding ethics: first, AI models must be explainable by oncologists; second, patients must consent to the use of AI in treatment decisions; and third, it is the oncologist's responsibility to protect patients from AI bias.
The findings come from a research project conducted at Harvard Medical School and published this spring in JAMA Network Open.
Andrew Hantel, MD, and colleagues report that 204 randomly selected oncologists from 37 states responded to a survey. The team's key findings include:
- When faced with an AI treatment recommendation that differs from their own, more than one-third (37%) of respondents said they would let the patient decide which treatment to choose.
- More than three-quarters (77%) believe oncologists should protect patients from AI tools that may be biased, such as when models are trained using a narrow range of data, but only 28% are confident they would be able to recognize such bias in a specific AI model.
In the discussion section, Hantel and co-authors note that responses regarding decision-making “were sometimes contradictory; patients were not expected to understand the AI tool but were expected to make decisions related to the recommendations generated by the AI.”
Furthermore, the researchers highlighted the gap they found between oncologists' sense of responsibility for protecting patients from AI-related bias and their readiness to actually combat it.
“These data characterize potential barriers to the ethical implementation of AI in cancer care,” they write.
Now a newly published journal article investigates what those results mean.
In “Key Questions Facing the Adoption of AI in Cancer Care,” science writer Mike Fillon spoke with Hantel and Shiraj Sen, MD, PhD, a Texas Oncology clinician and researcher who was not involved in the Harvard oncologist study.
The piece was posted July 4 in CA: A Cancer Journal for Clinicians, the American Cancer Society's flagship journal. In it, Sen said AI tools in oncology are “moving in three main directions.”
1. Treatment decisions.
“Fortunately for patients, the advent of new therapies means oncologists have multiple treatment options available for individual patients in specific care settings,” Sen said. “But often these therapies are not well studied.” He continued:
“AI tools that help incorporate prognostic factors, various biomarkers and other patient-related factors may soon be useful in this scenario.”
2. Radiological response assessment.
“Clinical trials using AI-assisted tools to assess radiological response to anti-cancer treatments are already underway,” Sen pointed out.
“In the future, these tools may help characterize tumor heterogeneity, predict treatment response, assess tumor aggressiveness, and guide personalized treatment strategies.”
3. Identifying and evaluating clinical trials.
“Fewer than 1 in 20 cancer patients participate in clinical trials,” Sen noted. “In the near future, AI tools may be able to help identify clinical trials that are right for individual patients and assist oncologists in preliminary assessments of which trials patients may be eligible for.”
“These tools will help improve efficient access to clinical trials for patients with advanced cancer and their oncologists.”
Meanwhile, Hantel said the widespread lack of confidence in recognizing bias in AI models “highlights the urgent need for systematic AI education and ethics guidelines in oncology.”
For oncology AI to be implemented ethically, infrastructure needs to be developed to support the training of oncologists, with transparency, consent, accountability and fairness on the checklist, Hantel added.
Hantel said it's equally important to understand patients' perspectives on the same issues, especially those from historically marginalized and underrepresented groups.
“We need to develop and test the viability of an ethical foundation for deploying AI that maximizes benefits and minimizes harms, [and we need to] educate clinicians about AI models and the ethics of their use.”
Both journal articles are available in full free of charge.
Buzzworthy developments of the past few days:
- AI bots are probably trying to scrape the very website this article sits on. Why wouldn't they? These tiny bots constantly patrol the internet looking for new content to consume. Many are hungry for fresh data on which to train generative AI models. And AIin.Healthcare, like any decent news site, is updated pretty frequently. Of course, hungry bots are part of the online ecosystem we all live in. Most can be dealt with by a quick tweak of a site's security settings (for the curious, a minimal robots.txt sketch appears after this roundup). Other bots, however, disguise both their identity and their intent. They get in through what amount to unlocked backdoors. To deal with them, you can fight AI with AI. Blue-chip companies offering content delivery network services, domain name services and the like have started doing just that in earnest, since GenAI began sneakily equipping bots with tools for bypassing website security settings. This month Cloudflare, the most widely used of these defensive services, upped the ante by adding a one-click option to block all AI bots. The feature is open even to Cloudflare's free-tier customers. All of this may sound like geeky stuff you can skip over, but the invisible battle of the AI bots reaches deep into healthcare. Plus, as Axios breaks it down, it's pretty interesting stuff.
- Judging by GenAI patents filed over the past decade, Chinese inventors are far ahead of their competitors in every other country. According to a new report from the World Intellectual Property Organization, 38,210 applications came from China between 2014 and 2023. The United States is a distant second with 6,276 applications, followed by South Korea (4,155), Japan (3,409), India (1,350), the UK (714) and Germany (708). The report notes that generative AI includes not only large language models but also generative adversarial networks (GANs), variational autoencoders (VAEs) and diffusion models. And the report's authors say ChatGPT's success has “driven innovation across a wide range of applications,” so don't be surprised if a new wave of GenAI patents floods the field in the near future: “A later update study, perhaps working with GenAI itself, should be able to visualize this development.” The report is extensive and analyzes a lot of data, some of it related to healthcare. Pymnts.com does a good job of contextualizing and summarizing it.
- Can you imagine it routinely costing $10 billion to train a GenAI model? Dario Amodei, CEO of Anthropic, says it's possible. In fact, he predicts training costs could reach $100 billion within three years. For perspective, consider that current models such as GPT-4o cost “only” about $100 million to train, Amodei said on a recent podcast. Some models now in training have already cost close to $1 billion, he added. And with rapid improvements in algorithms and chips, by the time that ridiculously high price tag is reached, it's likely that “we'll have models that are better than most humans in most respects.” Read the Tom's Hardware article here.
- Amazon has developed a GenAI assistant to help healthcare organizations generate marketing messages. According to a blogger from AWS' Generative AI Innovation Center, this was a tough challenge because medical content is “highly sensitive.” Drafting such content, then having it reviewed and approved by layers of experts, often takes a significant amount of time, generally much longer than for marketing materials in industries where the end customer is not the patient. The key question the AWS developers wanted to test was whether large language models could streamline the tedious draft-to-publish process of medical marketing. Their key finding was that “medical content generation for disease awareness is a key example of how LLMs can be leveraged to generate curated, high-quality marketing content in hours instead of weeks.” The blogger fills in the intermediate steps here.
- When Altman met Arianna. Sam Altman's OpenAI Startup Fund is joining forces with Thrive Global, the behavior-change platform company founded by Arianna Huffington, to launch what the pair call a “hyper-personalized AI health coach.” Called Thrive AI Health, the new venture's mission is to “democratize” access to expert-level health coaching. Joining the duo as lead investor is the Alice L. Walton Foundation. Writing in Time on Monday, Altman and Huffington said personalized behavior change through AI “offers a chance to finally reverse chronic disease trends” and, in the process, “benefit millions of people around the world.” It's a high-profile healthcare AI venture, and it's garnering attention early.
- Healthcare AI is welcomed and appreciated in Africa. This can be seen in the work of Dr. Sylvester Ikisemojie, a physician at the National Orthopedic Hospital in Lagos (metro population 16.5 million), who published an eloquent piece July 7 in the Nigerian daily newspaper Punch. “The emergence of AI in health care brings both great opportunities and significant challenges that must be approached with care, compassion, and ethical consideration,” he writes. He urges readers to keep cultivating those qualities in themselves and others, and to “ensure that our pursuit of innovation is guided by a deep commitment to human flourishing and the alleviation of suffering.”
- Accountants need AI, too. They just don't know it yet. This seems to hold some lessons for those involved in healthcare AI. “[B]ecause adoption of AI in the accounting industry is still very low,” accounting expert Shane Westra writes in CPA Practice Advisor, “if they approach AI in the right way, companies of all sizes and with diverse business models have a huge opportunity to be at the forefront of the AI transition across industries, gain a ‘first mover’ competitive advantage and gain significant momentum.”
- Educators do too, although maybe not for grading papers. That use case is dividing early adopters, The Wall Street Journal reports. “Is this [GenAI] going to make my life easier? Yes,” says one high school history teacher. “But that's not the purpose. The purpose is to improve students' writing.” The co-founder of the AI Education Project counters that “AI shouldn't be used for grading,” and that doing so would “erode trust in the education system.” Subscribers are continuing the conversation, in colorful style, in the comments section. Read the article here.
- Summary of recent research:
- Notable funding news:
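For readers curious what the non-Cloudflare route mentioned in the AI-bot item looks like, here is a minimal robots.txt sketch. It assumes a hypothetical site that wants to turn away two real, documented AI crawlers, OpenAI's GPTBot and Common Crawl's CCBot, while welcoming everything else; note that honoring robots.txt is voluntary on the crawler's part, which is exactly why rules like these don't stop the sneaky bots.

```
# Hypothetical robots.txt for a news site opting out of AI training crawlers.
# GPTBot (OpenAI) and CCBot (Common Crawl) are documented user agents;
# compliance with these directives is voluntary for the crawler.

User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# All other crawlers (search engines, etc.) may index the site normally.
User-agent: *
Allow: /
```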