Scientific paper - FUTURE-AI: international consensus guideline for trustworthy and deployable artificial intelligence in healthcare
Year: 2025

The FUTURE-AI guidelines were developed by an international consortium of researchers, clinicians, and AI specialists to promote the development of trustworthy artificial intelligence systems for healthcare. The framework focuses particularly on AI technologies used in medical imaging and clinical decision support, although its principles are intended to apply broadly to AI-driven health technologies. 

The initiative responds to growing concerns that many AI systems developed for healthcare perform well in experimental settings but fail to translate effectively into real-world clinical environments. The FUTURE-AI framework therefore proposes a set of guiding principles and practical recommendations designed to improve the reliability, robustness, and clinical usefulness of AI tools across healthcare systems.

Although the guidelines are not legally binding, they aim to support researchers, developers, regulators, and healthcare institutions in designing AI systems that are technically robust, ethically responsible, and capable of being deployed safely in clinical practice.

The framework is built around six guiding principles, whose initials form the FUTURE acronym:

Fairness 
AI systems should be designed to minimise bias and avoid discriminatory outcomes. Training datasets and evaluation procedures should represent diverse patient populations and clinical settings.
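As a minimal illustration of what a fairness audit might look like in practice, the following Python sketch compares a model's sensitivity across patient subgroups. The data, group labels, and choice of metric are hypothetical assumptions, not something prescribed by the guideline.

    import numpy as np
    from sklearn.metrics import recall_score

    def subgroup_sensitivity(y_true, y_pred, groups):
        """Report sensitivity (recall) per patient subgroup to surface bias."""
        return {
            g: recall_score(y_true[groups == g], y_pred[groups == g])
            for g in np.unique(groups)
        }

    # Hypothetical predictions for two demographic groups, "A" and "B".
    y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
    y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 1])
    groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

    for group, sens in subgroup_sensitivity(y_true, y_pred, groups).items():
        print(f"Group {group}: sensitivity = {sens:.2f}")

A systematic gap in sensitivity between groups would be one signal that the model, its training data, or its decision threshold needs further scrutiny.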

Universality 
AI tools should function reliably across different hospitals, regions, and patient groups. Systems must be validated beyond the environments in which they were originally developed to ensure broader applicability.
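One common way to probe this requirement is external validation: training on data from one site and evaluating on another. The sketch below simulates this with synthetic data from two hypothetical hospitals; the distribution shift, model, and metric are illustrative assumptions only.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    # Synthetic data for two hypothetical hospitals; site B is shifted
    # to mimic a different patient population or scanner.
    X_a = rng.normal(loc=0.0, size=(300, 5))
    X_b = rng.normal(loc=0.3, size=(300, 5))
    y_a = (X_a[:, 0] > 0).astype(int)
    y_b = (X_b[:, 0] > 0).astype(int)

    # Train only on site A, then validate externally on site B.
    model = LogisticRegression().fit(X_a, y_a)
    auc_internal = roc_auc_score(y_a, model.predict_proba(X_a)[:, 1])
    auc_external = roc_auc_score(y_b, model.predict_proba(X_b)[:, 1])
    print(f"internal AUC {auc_internal:.2f} vs external AUC {auc_external:.2f}")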

Traceability 
The development process, data sources, and model decisions should be documented and auditable. Traceability supports transparency, reproducibility, and accountability in the use of AI systems.
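A lightweight way to support such auditability is to store a structured provenance record alongside every trained model. The schema and field values in the sketch below are illustrative; the guideline itself does not mandate a specific format.

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class ModelRecord:
        """Illustrative provenance record for a trained clinical AI model."""
        model_name: str
        version: str
        training_data: str      # dataset identifier and version
        training_date: str
        intended_use: str
        known_limitations: str

    # All field values below are hypothetical.
    record = ModelRecord(
        model_name="chest-xray-triage",
        version="1.2.0",
        training_data="dataset-XYZ v3 (multi-site, 2019-2023)",
        training_date="2024-11-01",
        intended_use="triage support; not for standalone diagnosis",
        known_limitations="not validated on paediatric images",
    )

    # Persist alongside the model artefact so every deployment is auditable.
    print(json.dumps(asdict(record), indent=2))

Keeping such records in version control or a model registry makes it possible to trace which model version produced a given clinical output.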

Usability 
AI technologies must be designed with clinical workflows in mind. Systems should be understandable and accessible to healthcare professionals and should support rather than disrupt medical decision-making. 

Robustness 
AI systems should remain reliable under varying conditions, including differences in data quality, imaging equipment, and clinical environments. Robustness is essential for safe clinical deployment.
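A simple proxy for a robustness check is to re-evaluate a trained model on perturbed inputs, simulating degraded data quality, and compare against the clean baseline. The model, noise model, and noise levels below are placeholder assumptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    # Hypothetical stand-in for a clinical model and its test set.
    X = rng.normal(size=(200, 10))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    model = LogisticRegression().fit(X, y)
    clean_acc = accuracy_score(y, model.predict(X))

    # Simulate degraded data quality (e.g. acquisition noise) and re-evaluate.
    for noise in (0.1, 0.5, 1.0):
        X_noisy = X + rng.normal(scale=noise, size=X.shape)
        noisy_acc = accuracy_score(y, model.predict(X_noisy))
        print(f"noise={noise}: accuracy {clean_acc:.2f} -> {noisy_acc:.2f}")

A sharp performance drop at realistic noise levels would indicate the system is not yet ready for safe clinical deployment.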

Explainability 
Where possible, AI outputs should be interpretable and understandable to clinicians. Explainability helps ensure that AI-assisted decisions can be critically evaluated by medical professionals.
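As one concrete technique among many, permutation importance estimates how much each input feature contributes to a model's predictions by measuring the performance drop when that feature is shuffled. The clinical features and model below are hypothetical.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)

    # Hypothetical tabular clinical data with three named features.
    feature_names = ["age", "biomarker_level", "noise_feature"]
    X = rng.normal(size=(300, 3))
    y = (X[:, 1] > 0).astype(int)   # outcome driven by biomarker_level

    model = RandomForestClassifier(random_state=0).fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

    # Features whose shuffling degrades performance the most matter most.
    for name, score in zip(feature_names, result.importances_mean):
        print(f"{name}: importance = {score:.3f}")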

The guidelines highlight several practical requirements for implementing trustworthy AI in healthcare:

  • high-quality and representative training datasets 
  • transparent documentation of development processes 
  • rigorous external validation across multiple institutions 
  • continuous performance monitoring after deployment (see the sketch after this list) 
  • integration of AI systems into existing clinical workflows 
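Continuous performance monitoring, referenced in the list above, can be as simple as tracking an agreed metric over time and flagging when it falls below a pre-registered threshold. The weekly values and threshold in this sketch are hypothetical.

    # Minimal post-deployment monitoring sketch: flag when a tracked metric
    # falls below a pre-agreed threshold. All values are hypothetical.
    WEEKLY_ACCURACY = [0.91, 0.90, 0.92, 0.88, 0.84, 0.79]
    THRESHOLD = 0.85

    def check_performance(history, threshold):
        return [
            f"Week {week}: accuracy {acc:.2f} below threshold {threshold:.2f}"
            for week, acc in enumerate(history, start=1)
            if acc < threshold
        ]

    for alert in check_performance(WEEKLY_ACCURACY, THRESHOLD):
        print(alert)   # in practice this would notify the clinical safety team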

The framework also stresses the importance of interdisciplinary collaboration among clinicians, data scientists, ethicists, social scientists, engineers, and policymakers to ensure that AI tools meet both technical and clinical standards.

These guidelines provide a bridge between high-level ethical principles and technical best practices, helping to translate the concept of trustworthy AI into concrete methodological standards for healthcare AI development.

Authors of the paper: Karim Lekadir, Alejandro F. Frangi, Antonio R. Porras, Ben Glocker, Celia Cintas, Curtis P. Langlotz, Eva Weicken et al.

Journal of publication: BMJ

The full text of the paper is available at the following link.

Mónica Cano Abadía, BBMRI
Published: Wednesday, 05 February 2025 - Last modified: Wednesday, 06 May 2026