Before a new medical intervention can be brought to market, it must first go through a rigorous approval process run by the Therapeutic Goods Administration (TGA). For medicines, this typically means providing evidence from clinical trials to prove they reduce symptoms or treat health conditions with minimal risk. But what about the AI models we use in healthcare? Hospital + Healthcare contacted the TGA to find out.
From apps that diagnose melanoma to chatbots that suggest treatments, there is no shortage of AI solutions in this space.
But who decides which ones are safe, and by what standards are they evaluated for use in mainstream clinical practice?
AI approval
According to the TGA, AI falls within its powers if it is intended for the “diagnosis, prevention, monitoring, prediction, prognosis, treatment and mitigation of disease, injury or disability”.
Such AI is treated as a medical device and regulated accordingly, which means its approval process differs slightly from that for pharmaceuticals and biological products.
“To get approval, Australian [device] sponsors must submit an application to the TGA and provide relevant clinical and other evidence demonstrating that the product is safe and performs its intended use. Benefits of AI models must outweigh undesirable effects [and] risk […] must be kept to a minimum,” a TGA spokesperson told Hospital + Healthcare.
“Applicants should also outline how the sponsor will continue to monitor the [device] going forward. Sponsors remain responsible for the ongoing performance of their products while they are on the market, including recalls.”
For AI and other connected medical devices, there are also requirements for design, development, manufacturing, testing and maintenance, cybersecurity, and data and information management.
For example, manufacturers must continually review their cybersecurity threat landscape to reduce the risk that their products will be intercepted or their data captured by malicious attackers.
Various approaches
The assessment pathway also depends on the risk level of the AI model.
“For low-risk products, sponsors and manufacturers can self-certify compliance, but for high-risk products, an independent assessment of safety, performance and how the product is made is required,” the TGA said.
For all types of medical devices involving AI, the TGA can also accept regulatory approvals from comparable foreign regulatory authorities such as the U.S. Food and Drug Administration, Health Canada and European Notified Bodies.
The level of additional scrutiny applied to products supported by overseas regulatory approvals is based on risk and “Australia-specific requirements or concerns”.
“We are conducting increased scrutiny of some high-risk software and AI that can cause harm by providing incorrect information to patients and healthcare professionals.”
Post-market obligations
For AI models, post-market obligations are very important. Sponsors must demonstrate how they propose to manage risks, unintended bias, performance degradation, and off-label use, i.e. when AI is used for purposes not specified by the developer.
Once a product is on the market, adverse events must also be reported and recall measures must be followed in the event of a problem. This means immediately notifying end users and following strict TGA instructions.
Regardless of whether there is a problem, manufacturers must provide information and samples to the TGA upon request and report annually on the safety and performance of high-risk devices.
The TGA may conduct post-market reviews and investigations of medical devices at any time.
“For AI, we specifically review algorithm and model design, training and testing methodologies and evidence, accuracy, sensitivity, and specificity,” the spokesperson said.
Intervention selection is not regulated
The TGA does not regulate the selection of interventions in healthcare. Rather, this is largely at the discretion of hospitals and healthcare executives.
When determining whether an AI model is right for an organization, the Australian Commission on Safety and Quality in Health Care has made several recommendations.
They argue that AI must solve clear problems, integrate with workflows, and provide benefits that outweigh the risks, including the potential for bias and inequity.
Healthcare providers should review an AI model's evidence base, discuss its use with patients, and educate themselves about its functionality.
Healthcare providers using AI must also comply with related obligations. For smaller organizations, this may mean establishing governance and processes to ensure secure implementation.
TGA approval is not a final safety check
While TGA approval is very important, it is not the ultimate check and balance. Healthcare providers must recognize their responsibilities when implementing AI.
As the Australian Health Practitioner Regulation Agency (Ahpra) states on its website, “Approval of the tool does not change the responsibility of healthcare professionals to apply human oversight and judgment to the use of AI”.
The TGA stamp does not negate the ethical issues that AI could potentially raise.
To maintain ethics, healthcare providers must be transparent with patients about the use of AI and obtain informed consent.
In short: all clinical healthcare AI requires TGA approval, but not all TGA-approved AI is appropriate for every organization.
