
WHO - Regulatory considerations on artificial intelligence for health
Year 2023

The WHO has published key recommendations that developers, manufacturers, policymakers, regulators, legislative bodies, and healthcare professionals should follow and adopt in the production and application of artificial intelligence (AI) systems within the healthcare sector.

The document, drafted by the Working Group on Regulatory Considerations on AI for Health, aims to provide a comprehensive overview of the regulatory initiatives and best practices that should be followed and implemented when AI is used in the medical field, in order to strengthen the protection of human health.

Specifically, the document outlines the following areas and actions for intervention:

  1. Documentation and transparency: It is recommended that the intended medical purpose of the AI system and its development process be clearly defined and documented in advance. This includes providing information on dataset selection and usage, reference standards, system parameters and metrics, as well as any deviations or updates from the original design that may occur during development, in order to ensure appropriate traceability of all development stages. A risk-based approach is also advised for determining the level of documentation required for the development and validation of AI systems and for document retention.
  2. Risk management approaches and lifecycle governance of AI systems: A regulatory approach covering the full lifecycle of the AI system is recommended, including pre-market development, post-market surveillance, and management of necessary modifications. It is also essential to promote a risk management approach that addresses AI-specific risks, such as cybersecurity threats and vulnerabilities, underfitting risks, and algorithmic bias.
  4. Intended use and analytical and clinical validation: Transparent documentation of the intended use of the AI system must be provided. All details about the dataset used to train the AI (such as size, context, population, input and output data, and demographic composition) should be documented and shared transparently with users. External validation mechanisms based on independent datasets that are representative of the target population and context are encouraged, along with clear documentation of the external dataset and of the performance metrics obtained. Regarding risk-based clinical validation, the document notes that randomized clinical trials, considered the gold standard for evaluating clinical performance, may be appropriate for high-risk systems or those requiring a higher standard of scientific evidence; in other cases, prospective validation through real-world studies may be recommended. Finally, the document recommends an intensified period of monitoring after the deployment of AI systems, through post-market surveillance.
  5. Data quality: Developers are urged to assess whether the available data are of sufficient quality to support AI development for its intended purpose, including implementing rigorous evaluations to ensure that the system does not amplify bias or errors. Careful design and effective troubleshooting mechanisms can help identify data quality issues early and prevent or mitigate potential harm. Stakeholders should also consider strategies to mitigate data quality problems and the associated risks of using health data, and foster data ecosystems that enable the sharing of high-quality data.
  5. Privacy and personal data protection: Privacy and data protection should be taken into account during the design and implementation of AI systems, in full compliance with applicable legal frameworks. The document emphasizes the importance of implementing a compliance program that addresses foreseeable risks and ensures that privacy protection measures reflect potential harms and the specific application environment.
  6. Engagement and collaboration: In developing future roadmaps for AI innovation and deployment, the document stresses the importance of accessible and informative platforms that facilitate engagement and collaboration among key stakeholders. This would help streamline the regulatory process and accelerate the development of AI innovations that will transform medical practice.

Lastly, the document encourages stakeholders, regulatory bodies, and manufacturers to continue engaging in dialogue and information-sharing. It recommends that national and international groups—such as the International Medical Device Regulators Forum and the International Coalition of Medicines Regulatory Authorities—continue working toward the convergence and harmonization of relevant AI regulations.


Marta Fasan
Published: Thursday, 19 October 2023 - Last modified: Tuesday, 1 July 2025