
High-Level Expert Group on Artificial Intelligence - Ethics Guidelines for Trustworthy Artificial Intelligence
Year: 2019

The Ethics Guidelines for Trustworthy Artificial Intelligence were published by the European Commission’s High-Level Expert Group on Artificial Intelligence in April 2019 as part of the EU’s broader strategy to promote a human-centric approach to AI development and deployment. The guidelines aim to foster the development and use of AI systems that respect fundamental rights, democratic values, and the rule of law while supporting innovation and economic growth in the European Union.  

Rather than establishing binding obligations, the document provides an ethical framework intended to guide policymakers, developers, deployers, and organisations involved in the design and use of AI systems. It has played a foundational role in shaping the EU’s policy approach to artificial intelligence, influencing later initiatives including the EU AI Act and related governance mechanisms.

Concept of “Trustworthy AI” 

The guidelines introduce the concept of Trustworthy AI, which serves as the central normative objective of the framework. According to the document, an AI system can be considered trustworthy only if it meets three cumulative conditions throughout its life-cycle:

  1. Lawfulness: compliance with all applicable laws and regulations.
  2. Ethical alignment: adherence to ethical principles and values.
  3. Robustness: both technical and social reliability, ensuring resilience and preventing unintended harm. 

Key Requirements for Trustworthy AI 

Building on the ethical principles, the guidelines identify seven key requirements that AI systems should meet in order to be considered trustworthy. These requirements apply throughout the entire life-cycle of an AI system, including design, development, deployment, and monitoring.

Human agency and oversight 
AI systems should support human decision-making and preserve human autonomy. Appropriate oversight mechanisms—such as human-in-the-loop or human-on-the-loop approaches—should be implemented to ensure meaningful human control.

Technical robustness and safety 
Systems must be reliable, secure, and resilient. Mechanisms should exist to address errors or failures, ensuring accuracy, reproducibility, and safeguards against unintended harm.

Privacy and data governance 
AI systems should respect privacy and ensure appropriate data governance practices, including data quality, integrity, and access management.

Transparency 
The operation of AI systems should be explainable and traceable. Users and affected individuals should be able to understand how decisions are produced.

Diversity, non-discrimination and fairness 
AI systems should avoid bias and discrimination while promoting inclusiveness and equal treatment across different social groups.

Societal and environmental well-being 
AI systems should contribute positively to society and avoid negative impacts on social structures, democratic processes, or the environment.

Accountability 
Mechanisms should exist to ensure responsibility for AI outcomes, including auditability, risk management processes, and access to redress.

Role within the EU AI Governance Framework

Although the Ethics Guidelines themselves are non-binding, they have significantly influenced the EU’s regulatory trajectory on artificial intelligence. The concept of trustworthy AI introduced in the document became a central pillar of the European approach to AI governance and informed subsequent policy initiatives, including the EU’s legislative framework on artificial intelligence.

In this sense, the guidelines represent an early attempt to translate ethical principles into operational requirements, bridging the gap between abstract values and concrete governance mechanisms for AI systems.

The guidelines are available at the following link.

Mónica Cano Abadía, BBMRI
Published: Monday, 8 April 2019 - Last modified: Wednesday, 6 May 2026