
World Health Organization guidance on artificial intelligence for health
2025

The World Health Organization has developed a set of guidance documents addressing the ethical governance, regulation, and responsible use of artificial intelligence in healthcare. These publications form part of the organisation’s broader effort to support governments and health systems in managing the opportunities and risks associated with the growing use of AI in medicine, public health, and health system management.

Three key WHO publications structure the organisation's current policy approach to artificial intelligence for health.

Core principles 

Across its guidance, the World Health Organization identifies a set of ethical principles that should guide the design, deployment, and governance of AI systems in healthcare. These principles are intended to ensure that AI strengthens health systems while protecting patients, healthcare professionals, and the public. 

Key principles include: 

Protection of human autonomy 
AI systems should respect patients’ rights and dignity and should not undermine the ability of individuals to make informed decisions about their health. Healthcare professionals must retain meaningful oversight over AI-supported clinical decisions.

Promotion of human well-being and safety 
AI technologies should contribute to improved health outcomes and must be rigorously tested to ensure they are safe, effective, and reliable in real clinical environments.

Transparency and explainability 
The functioning and outputs of AI systems should be understandable to healthcare professionals and, where relevant, to patients. Transparency supports trust, accountability, and appropriate clinical use.

Responsibility and accountability 
Clear lines of responsibility should exist for the development, deployment, and outcomes of AI systems. Mechanisms should allow for oversight, auditing, and access to remedies when harm occurs.

Equity and inclusiveness
AI systems should contribute to reducing inequalities in healthcare rather than reinforcing them. Particular attention should be given to ensuring that datasets, system design, and deployment strategies reflect diverse populations and healthcare contexts.

Responsiveness and sustainability
AI technologies should be continuously monitored and adapted to evolving clinical needs, healthcare infrastructure, and public health priorities. Long-term sustainability and system resilience are key considerations.

The WHO highlights the need for regulatory frameworks capable of addressing the distinctive characteristics of AI technologies. In particular, regulators are encouraged to adopt lifecycle-based oversight covering the development, validation, deployment, and monitoring of AI systems used in healthcare.

The guidance documents also stress the importance of data governance, clinical validation, and post-deployment monitoring, especially for machine learning systems that may evolve over time. Governments are encouraged to strengthen regulatory capacity and to promote international cooperation so that safety standards for AI-enabled health technologies remain consistent across jurisdictions.

The 2025 guidance addressing generative AI further highlights emerging concerns such as the reliability of AI-generated medical information, risks of misinformation, and the need to ensure appropriate human oversight when such systems are used in health contexts.

These WHO guidance documents represent one of the most comprehensive international efforts to articulate ethical and governance standards for artificial intelligence in healthcare. They provide policymakers and regulators with a common reference framework aimed at aligning technological innovation with public health goals, patient safety, and respect for human rights.

Mónica Cano Abadía, BBMRI
Published: Tuesday, 25 March 2025 - Last modified: Tuesday, 12 May 2026