
Evaluation tool - Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment
Year: 2020

The Assessment List for Trustworthy Artificial Intelligence (ALTAI) is a self-assessment tool developed by the High-Level Expert Group on Artificial Intelligence to help organisations evaluate whether their AI systems align with the European approach to trustworthy AI.

ALTAI is intended primarily for developers, deployers, and organisations implementing AI systems.

ALTAI is available as an interactive online tool and can be applied at different stages of development and deployment, allowing organisations to conduct internal evaluations and identify areas where improvements may be needed. The assessment does not produce a formal certification but provides feedback that organisations can use to improve their AI governance practices.

The ALTAI framework is structured around seven key requirements derived from the European Union’s concept of trustworthy AI. These requirements are translated into a series of questions and evaluation criteria that organisations can use to assess their systems.

The main areas addressed in the assessment include:

Human agency and oversight
AI systems should support human decision-making and allow meaningful human supervision. Organisations should ensure that appropriate oversight mechanisms are in place and that users understand how the system operates.

Technical robustness and safety
Systems should be reliable, secure, and resilient. The assessment considers issues such as error management, system accuracy, cybersecurity safeguards, and the ability to prevent unintended harm.

Privacy and data governance
Organisations should ensure that personal data used in AI systems is collected, processed, and stored in accordance with applicable data protection standards, while maintaining high standards of data quality and integrity.

Transparency
AI systems should provide appropriate levels of explainability and traceability. Users and affected individuals should be able to understand how decisions or recommendations are produced.

Diversity, non-discrimination and fairness
Organisations should evaluate whether their systems may produce biased outcomes and implement safeguards to mitigate discriminatory effects.

Societal and environmental well-being
AI technologies should contribute positively to society and avoid harmful effects on democratic values, social structures, or environmental sustainability.

Accountability
Organisations should establish governance structures that allow responsibility for AI systems to be clearly assigned. Mechanisms such as auditing, risk management, and access to remedies are considered in the assessment.
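
To make the framework's structure more concrete, the sketch below shows one way an organisation might record answers to ALTAI-style questions in code, grouped under the seven requirements listed above. This is a minimal illustration only: the requirement names come from the list above, while the question texts, field names, and scoring scheme are hypothetical assumptions, not the official ALTAI questionnaire.

```python
from dataclasses import dataclass, field

# The seven ALTAI requirements, as listed above.
REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental well-being",
    "Accountability",
]

@dataclass
class Answer:
    """One self-assessment answer: a yes/no question plus free-text evidence.

    The questions and fields are illustrative, not the official ALTAI items.
    """
    question: str
    fulfilled: bool
    evidence: str = ""

@dataclass
class SelfAssessment:
    """Answers grouped by requirement; yields feedback, not a certification."""
    answers: dict[str, list[Answer]] = field(
        default_factory=lambda: {r: [] for r in REQUIREMENTS}
    )

    def record(self, requirement: str, answer: Answer) -> None:
        if requirement not in self.answers:
            raise ValueError(f"Unknown ALTAI requirement: {requirement!r}")
        self.answers[requirement].append(answer)

    def summary(self) -> dict[str, str]:
        """Per-requirement share of positively answered questions."""
        return {
            req: (f"{sum(a.fulfilled for a in items)}/{len(items)} fulfilled"
                  if items else "not assessed")
            for req, items in self.answers.items()
        }

# Example usage with hypothetical questions:
assessment = SelfAssessment()
assessment.record(
    "Human agency and oversight",
    Answer("Can a human override the system's decisions?", True,
           "Operators can pause or reverse any automated decision."),
)
assessment.record(
    "Transparency",
    Answer("Are users informed that they are interacting with an AI system?", False),
)
for requirement, status in assessment.summary().items():
    print(f"{requirement}: {status}")
```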

ALTAI is one of the earliest practical instruments developed within the European Union to operationalise ethical principles for artificial intelligence. By translating normative principles into concrete evaluation criteria, the tool supports organisations in implementing the European vision of trustworthy AI in real-world technological systems.

Although developed prior to the adoption of the EU's binding AI legislation (the AI Act), the framework continues to serve as a useful reference for organisations seeking to align their AI development practices with emerging European regulatory and ethical standards.

Developer of the evaluation tool: High-Level Expert Group on Artificial Intelligence (European Commission).

The evaluation tool is available at the following link.

Mónica Cano Abadía, BBMRI
Published: Friday, 17 July 2020 - Last modified: Wednesday, 6 May 2026