
FDA and EMA Release Joint Guiding Principles for Good AI Practice in Drug Development

On January 14, 2026, the U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA) jointly published a set of 10 guiding principles for the responsible, effective, and trustworthy use of artificial intelligence (AI) across the entire drug product life cycle — from early discovery through post-marketing pharmacovigilance.

The announcement, distributed through official channels including the FDA China Office, marks the first formal collaborative framework on good AI practice (GPAI) between the two largest medicines regulators. It reflects several months of intensive bilateral work and is intended to serve as a foundational reference for industry, academic researchers, international standards bodies, and other regulators worldwide.

The 10 Guiding Principles – High-Level Summary

While the full text is not reproduced in the announcement, the principles address the following core areas (based on the joint statement and typical content of such documents):

  1. Risk-based approach – Apply AI proportionate to the potential impact on patient safety, data integrity, and decision reliability.
  2. Transparency & explainability – Sponsors must provide sufficient information about model architecture, training data, performance metrics, and limitations so regulators and reviewers can understand how conclusions were reached.
  3. Data quality & relevance – Training, validation, and test datasets must be fit-for-purpose, representative of the target population, and free from relevant biases.
  4. Robustness & generalizability – Models should be validated across diverse populations, geographies, and real-world conditions to minimize performance drop in new settings.
  5. Human oversight – AI systems must remain under meaningful human control, especially for high-stakes decisions (e.g., dosing, eligibility, safety signals).
  6. Traceability & reproducibility – Every AI-derived insight used in regulatory submissions must be fully traceable back to source data, code, and processing steps.
  7. Change control & lifecycle management – Sponsors must have documented processes for monitoring, retraining, and re-validating models when new data become available or when the model is updated.
  8. Security, privacy & ethical use – Strong cybersecurity, data protection (GDPR/HIPAA alignment), and ethical safeguards against misuse or bias amplification.
  9. Performance monitoring in the real world – Post-approval surveillance of AI tools used in drug development or pharmacovigilance.
  10. International collaboration – Commitment to work toward greater alignment with other regulators (PMDA, MHRA, Health Canada, TGA, etc.) and standards organizations (ICH, ISO, IEEE).

These principles are deliberately high-level and technology-agnostic — they apply equally to classical machine learning, deep learning, large language models, generative AI, foundation models, and hybrid approaches.
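Principles 6 and 7 (traceability and change control) translate naturally into engineering practice: every AI-derived insight can be pinned to content hashes of the exact data and code that produced it. The following is a minimal sketch in Python under that assumption; the class, field names, and artifact values are illustrative inventions, not taken from the FDA/EMA document.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class ModelProvenanceRecord:
    """One audit-trail entry linking an AI-derived insight to its exact inputs."""
    model_version: str
    training_data_sha256: str   # content hash of the frozen training dataset
    code_sha256: str            # content hash of the training/inference code
    performance_summary: dict   # validation metrics recorded at release time

def content_hash(payload: bytes) -> str:
    """SHA-256 content hash: pins data and code to exact byte-level versions."""
    return hashlib.sha256(payload).hexdigest()

# Hypothetical artifacts; in practice these would be the real dataset and script bytes.
dataset_bytes = b"patient-level training data (frozen export)"
code_bytes = b"training pipeline source, tag v1.3.0"

record = ModelProvenanceRecord(
    model_version="candidate-ranker-1.3.0",
    training_data_sha256=content_hash(dataset_bytes),
    code_sha256=content_hash(code_bytes),
    performance_summary={"auroc": 0.91, "validation_cohort": "2025-Q3"},
)

# The record is plain data, so it can be serialized and travel with a submission.
print(json.dumps(asdict(record), indent=2))
```

Because the record is immutable and hash-anchored, any retraining (principle 7) produces a new record rather than silently overwriting the old one, preserving the full lineage a reviewer would need.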

Why This Release Matters – Practical Implications

  • First true transatlantic alignment on AI in medicines regulation
    Until now, FDA and EMA had published separate (though similar) discussion papers and reflection documents. The joint principles signal that both agencies intend to converge on expectations, reducing duplicative work for global sponsors.
  • Strong push for transparency & independent validation
    The repeated emphasis on explainability, traceability, and real-world performance monitoring indicates that “black-box” models — especially large generative or multimodal foundation models — will face very high evidentiary hurdles in regulatory submissions.
  • Support for innovation with guardrails
    The document explicitly recognizes AI’s potential to:
    • accelerate discovery & reduce time-to-market
    • strengthen pharmacovigilance
    • improve human relevance of non-clinical models
    • decrease reliance on animal testing
    At the same time, it makes clear that innovation will not come at the expense of patient safety or data integrity.
  • Global ripple effect expected
    Other major regulators (PMDA Japan, Health Canada, MHRA UK, TGA Australia, NMPA China) are likely to reference or adopt large parts of the framework within 12–18 months, creating de-facto global good-practice standards for AI in drug development.

Timeline & Next Steps

  • The principles are non-binding but represent current regulatory thinking.
  • Both agencies state they will use the document to inform future guidance development, inspection expectations, and review practices.
  • Sponsors are encouraged to engage early with FDA (via INTERACT/pre-IND meetings) and EMA (scientific advice) when planning AI-heavy development programs.
  • First ICH reflection on AI is expected in 2027–2028, with Pax Silica partners (U.S., U.K., Japan, Korea, etc.) likely pushing for alignment with these principles.

Bottom Line (January 2026 Perspective)

The joint FDA–EMA Guiding Principles are not revolutionary in content — many points have appeared in earlier discussion papers — but they are revolutionary in form: the first time the two dominant regulators have spoken with one voice on AI in medicines.

For global drug developers, this is a clear signal:
  • Transparency, traceability, and rigorous validation are non-negotiable.
  • Early and proactive engagement with regulators is now even more important.
  • The path to acceptance of AI-derived evidence is becoming more predictable — but also more demanding.

The era of “AI as a black box” in regulatory submissions is officially over. The next 24–36 months will show whether industry can meet the bar that FDA and EMA have now jointly set.
