EMA and FDA collaborate on framework for using AI in drug development

In an important step in the regulatory oversight of computational tools in the pharmaceutical industry, the European Medicines Agency (EMA) and FDA have established 10 guiding principles aimed at supporting the safe and ethical use of AI. AI is applied throughout the entire pharmaceutical product chain, from early laboratory research to the manufacturing site and on to post-market oversight (1, 2). This collaboration provides a framework for managing the complexity and dynamic nature of these tools, helping to ensure that the evidence they produce is accurate and reliable.

What are the 10 guiding principles established for pharmaceutical AI?

This collaborative initiative identifies specific areas where drug developers and technology standards organizations can coordinate their work (2). These principles are:

1. Human-centered design: The development and use of AI technology is consistent with ethical and human-centered values (2, 3).

2. Risk-based approach: The development and use of AI technologies follows a risk-based approach, with appropriate validation, risk mitigation, and monitoring based on the context of use and the identified model risks (2, 3).

3. Compliance with standards: AI technologies comply with relevant legal, ethical, technical, scientific, cybersecurity, and regulatory standards, including good practices (GxP) (2, 3).

4. Clear context of use: AI technologies have a well-defined context of use (a clearly defined role and scope for their use) (2, 3).

5. Multidisciplinary expertise: Multidisciplinary expertise covering both AI technology and its context of use is integrated across the entire technology lifecycle (2, 3).

6. Data governance and documentation: The origins of data sources, processing procedures, and analytical decisions are documented in a detailed, traceable, and verifiable manner in line with GxP requirements (2, 3).

7. Model design and development practices: The development of AI technologies follows best practices in model and system design and software engineering, and uses data that are appropriate for the intended use, considering interpretability, explainability, and predictive performance (2, 3).

8. Risk-based performance assessment: Performance is assessed on a risk basis across the entire system, including human-AI interactions, using data and metrics appropriate for the intended use, with predicted performance validated through well-designed testing and evaluation methodologies (2, 3).

9. Lifecycle management: Risk-based quality management systems are implemented throughout the lifecycle of AI technologies, including support for problem capture, assessment, and response (2, 3).

10. Clear and important information: Plain language is used to present clear, accessible, and contextually relevant information to target audiences, including users and patients, regarding the usage, performance, limitations, underlying data, updates, and interpretability or explainability of AI technologies (2, 3).

How will these standards impact drug development and manufacturing?

The implementation of these principles is expected to facilitate more efficient development pathways for both traditional drugs and biologics; in this context, the term “medicine” encompasses both categories across jurisdictions (1-3). In a message to members of the American Association of Pharmaceutical Scientists (4), Dr. Mark Arnold, owner and principal of Bioanalytical Solutions Integration, writes: “After several months of cooperation, today the FDA and the European Medicines Agency… [AI] generate evidence across all stages of the drug lifecycle. The integration of AI in drug development has the potential to transform the way drugs are developed and evaluated, ultimately improving healthcare. By improving predictions of toxicity and efficacy in humans, AI technology is expected to help foster innovation, reduce time to market, strengthen regulation and pharmacovigilance, and reduce reliance on animal testing.”

Taken together, these principles mean that future regulatory filings involving AI will likely require tighter multidisciplinary integration and thorough documentation of data provenance to meet Good Manufacturing Practice expectations. “As an advisor to an AI company and a user of AI software, these are 10 short and clear guides,” Arnold said. By adhering to these standards, companies can better prepare for future jurisdictional guidelines while contributing to a global innovation environment that prioritizes patient safety.

European Commissioner for Health and Animal Welfare Olivér Várhelyi said in an EMA news release: “The Guiding Principles on Good Practice for AI in Pharmaceutical Development are the first step in new EU-US cooperation in the field of new medical technologies. The Principles are a good example of how we can work together on both sides of the Atlantic to maintain our leading role in the global innovation race, while ensuring the highest levels of patient safety.” (1) This foundational guidance is expected to evolve alongside the technology, with a continued focus on demonstrated quality, efficacy, and safety (2).

References

  1. European Medicines Agency. EMA and FDA Have Set Common Principles for AI in Drug Development. Press release, January 14, 2026.
  2. FDA. Guiding Principles for Good Practice for AI in Drug Development. Accessed January 14, 2026.
  3. FDA. Artificial Intelligence for Drug Development: Guiding Principles for Good AI Practice in Drug Development. Accessed January 14, 2026.
  4. American Association of Pharmaceutical Scientists. AAPS Community Digest for Wednesday, January 14, 2026. Email newsletter.


