
AHEAD - Ethical and Sociological debate

This section collects materials about the ethical debate surrounding the use of AI in medicine and healthcare.

World Health Organization guidance on artificial intelligence for health
Year 2025

  • The World Health Organization has developed a set of guidance documents addressing the ethical governance, regulation, and responsible use of artificial intelligence in healthcare. These publications form part of the organisation’s broader effort to support governments and health systems in managing the opportunities and risks associated with the growing use of AI in medicine, public health, and health system management.

Scientific paper - FUTURE-AI: international consensus guideline for trustworthy and deployable artificial intelligence in healthcare
Year 2025

  • The FUTURE-AI guidelines were developed by an international consortium of researchers, clinicians, and AI specialists to promote the development of trustworthy artificial intelligence systems for healthcare. The framework focuses particularly on AI technologies used in medical imaging and clinical decision support, although its principles are intended to apply broadly to AI-driven health technologies. 

    The initiative responds to growing concerns that many AI systems developed for healthcare perform well in experimental settings but fail to translate effectively into real-world clinical environments. The FUTURE-AI framework therefore proposes a set of guiding principles and practical recommendations designed to improve the reliability, robustness, and clinical usefulness of AI tools across healthcare systems.

Scientific paper - Embedded ethics: a proposal for integrating ethics into the development of medical AI
Year 2022

  • The article Embedded ethics: a proposal for integrating ethics into the development of medical AI proposes a model for incorporating ethical analysis directly into the research and development process of artificial intelligence used in healthcare. The authors argue that existing AI ethics guidelines often remain too abstract and detached from the practical realities of system design and implementation.

UNESCO - Recommendation on the Ethics of Artificial Intelligence
Year 2021

  • This Recommendation addresses ethical issues related to the domain of Artificial Intelligence to the extent that they are within UNESCO’s mandate. It approaches AI ethics as a systematic normative reflection, based on a holistic, comprehensive, multicultural and evolving framework of interdependent values, principles and actions that can guide societies in dealing responsibly with the known and unknown impacts of AI technologies on human beings, societies and the environment and ecosystems, and offers them a basis to accept or reject AI technologies. It considers ethics as a dynamic basis for the normative evaluation and guidance of AI technologies, referring to human dignity, well-being and the prevention of harm as a compass and as rooted in the ethics of science and technology.

Evaluation tool - Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment
Year 2020

  • The Assessment List for Trustworthy Artificial Intelligence (ALTAI) is a self-assessment tool developed by the High-Level Expert Group on Artificial Intelligence to help organisations evaluate whether their AI systems align with the European approach to trustworthy AI.

    ALTAI is intended primarily for developers, deployers, and organisations implementing AI systems.

    ALTAI is available as an interactive online tool and can be applied at different stages of development and deployment, allowing organisations to conduct internal evaluations and identify areas where improvements may be needed. The assessment does not produce a formal certification but provides feedback that organisations can use to improve their AI governance practices.

High-Level Expert Group on Artificial Intelligence - Ethics Guidelines for Trustworthy Artificial Intelligence
Year 2019

  • The Ethics Guidelines for Trustworthy Artificial Intelligence were published by the European Commission’s High-Level Expert Group on Artificial Intelligence in April 2019 as part of the EU’s broader strategy to promote a human-centric approach to AI development and deployment. The guidelines aim to foster the development and use of AI systems that respect fundamental rights, democratic values, and the rule of law while supporting innovation and economic growth in the European Union.  

    Rather than establishing binding obligations, the document provides an ethical framework intended to guide policymakers, developers, deployers, and organisations involved in the design and use of AI systems. It has played a foundational role in shaping the EU’s policy approach to artificial intelligence, influencing later initiatives including the EU AI Act and related governance mechanisms.

Published: Monday, 14 April 2025 - Last modified: Tuesday, 12 May 2026