Medical devices that leverage artificial intelligence or machine learning algorithms are rapidly transforming healthcare in the US, with the Food and Drug Administration having already authorized the marketing of more than 1,000 such devices and many more in the development pipeline. A new paper from a University of Illinois Urbana-Champaign expert on the ethical and legal challenges of AI and big data in healthcare argues that regulators must improve the regulatory framework for AI-based medical devices to ensure transparency and protect patient health.
Sara Gerke, a professor of law and the Richard W. & Marie L. Corman Scholar, says the FDA must prioritize the development of labeling standards for AI-powered medical devices, in much the same way that packaged foods carry nutrition facts labels.
“Current labeling criteria for AI or machine learning-based medical devices are obstacles to transparency in that they prevent users from receiving important information about the device and its safe use, such as the race, ethnicity and gender breakdown of the training data used,” she said. “One potential remedy is for the FDA to take valuable lessons from food nutrition labeling and apply them to the development of labeling standards for AI-based medical devices.”
Promoting greater transparency around AI-based medical devices is complicated not only by the various regulatory issues surrounding AI, but also by questions about what constitutes a medical device in the eyes of the US government.
If something is considered a medical device, “the FDA has the power to regulate that tool,” Gerke said.
“The FDA has the authority from Congress to regulate medical products such as drugs, biologics and medical devices,” she said. “With a few exceptions, products equipped with AI or machine learning and intended to be used to diagnose, cure, mitigate, treat or prevent disease are classified as medical devices under the Federal Food, Drug, and Cosmetic Act. As such, the FDA can assess the safety and efficacy of the device.”
When a drug is tested in a clinical trial, you can “have greater confidence that it is safe and effective,” she said.
However, Gerke noted that there have been few clinical trials of AI tools in the United States.
“Many AI-powered medical devices are based on machine learning, and particularly its subset deep learning, and are essentially ‘black boxes,’ meaning it is difficult, if not impossible, for humans to understand why a tool made certain recommendations, predictions or decisions,” she said. “Algorithms can also be adaptive when unlocked, and therefore can actually be more unpredictable than drugs, which undergo rigorous testing in clinical trials.”
It is also difficult to assess the reliability and effectiveness of new technologies once implemented in hospitals, Gerke said.
“It usually depends on the patient population and other factors, so tools often need to be recalibrated before being deployed at a given hospital. It’s much more complicated than just plugging it in and using it with a patient,” she said.
The FDA has not yet authorized the marketing of generative AI models similar to ChatGPT, but it is almost certain that such devices will eventually reach the market, and it will be necessary to disclose to both medical practitioners and patients that such products produce AI-generated output, said Gerke, who is also affiliated with the European Union Center at Illinois.
“The results generated by these devices need to be clearly disclosed to practitioners and patients, simply because we are still in the early stages of this technology,” she said.
According to Gerke, the paper is the first to argue that regulators like the FDA need to develop not only “AI Facts labels” but also a “front-of-package” AI labeling system.
“Using front-of-package AI labels as a complement to the AI Facts labels will further enhance user literacy by providing at-a-glance, easy-to-understand information about medical devices, allowing users to make informed decisions about their use,” she said.
In particular, Gerke calls for two AI Facts labels: one aimed primarily at healthcare professionals, the other at consumers.
“In sum, a comprehensive labeling framework for AI-powered medical devices should consist of four components: two AI Facts labels, a front-of-package AI labeling system, the use of modern technologies such as smartphone apps, and additional labeling,” she said. “Such a framework includes simple things like ‘trusted AI’ symbols, instructions for use, patient fact sheets and the labeling of AI-generated content. All of these enhance user literacy regarding the benefits and pitfalls of AI, in much the same way that food labeling provides consumers with information about nutritional content.”
The recommendations in the paper are not exhaustive, but they should help regulators begin thinking about the “challenging but necessary task” of developing labeling standards for AI-powered medical devices, Gerke said.
Photo by Fred Zwicky
“This paper draws a connection between front-of-package nutrition labeling systems and the promise of AI, and is the first to make concrete policy proposals for a comprehensive labeling framework for AI-based medical devices,” she said.
The paper was published in the Emory Law Journal.
The study was funded by the European Union.