Building trustworthy AI in Europe

Changed on 17/07/2024
Since the European Commission's experts published ethical guidelines for trustworthy AI in 2019, a great deal of documentation on the subject has been issued and shared by all the players in the field. What is the current status of European initiatives to unite around this issue, and what impact are they having on the world of research? We take a look at the latest developments.

Over the past few years, a number of guides, recommendations and tools have been published in France on this subject, including the Confiance.ai programme's white paper[1], Numeum's practical guide to ethical AI[2] in 2021 and the Hub France IA's white paper[3] in 2023. All of these recommendations converge on the definition given by the European Commission's experts: an artificial intelligence (AI) or artificial intelligence system (AIS) can be considered trustworthy if it is lawful under European regulations, ethical with respect to moral principles and values, and technically and socially robust.

By focusing on a core set of essential values, such as safety, sustainable impact, autonomy, human responsibility, explainability, fairness and respect for privacy, these publications also offer a number of recommendations for industrialising AI: building AI components with controlled trust, constructing data and/or knowledge that increase trust in learning, and designing trust-generating interaction between the user and the AI-based system.

Trustworthy AI: the need for shared values

While the AI Act adopted in March 2024 laid down Europe's legal framework, the guidelines have, since 2018, defined seven requirements that an AIS must meet to be considered trustworthy.

This basic foundation covers systemic, individual and societal aspects for both industry and civil society:

  1. Human agency and oversight: respect for fundamental rights, human agency and human oversight.
  2. Technical robustness and safety: resilience to attack and security, fallback plans and general safety, accuracy, reliability and reproducibility.
  3. Privacy and data governance: respect for privacy, quality and integrity of data, and access to data.
  4. Transparency: traceability, explainability and communication.
  5. Diversity, non-discrimination and fairness: avoidance of unfair or discriminatory bias, accessibility and universal design, stakeholder participation.
  6. Societal and environmental well-being: sustainability and respect for the environment, social impact, society and democracy.
  7. Accountability: auditability, minimisation and reporting of negative impacts, trade-offs and redress.

These seven non-exhaustive requirements are also shared by major bodies such as UNESCO, which has added an eighth requirement: human dignity.

 


A multidisciplinary subject in advanced digital technology

"When we talk about trustworthy AI, we are talking about a multi-disciplinary subject, at the crossroads of human, social and technical issues, and a subject that involves other areas of advanced digital technology," explains Ikram Chraibi Kaadoud, Project Leader for Trusted AI and Ethical Management at the Inria Centre at the University of Bordeaux.

Trustworthy AI: a multidisciplinary subject

The guidelines presented above recommend both technical and non-technical methods for developing trustworthy AIS. Taking into account the cognitive complexity of human beings as users, and the risks borne by system users, is a project-management issue and not just a technical one.

The "performance-ecological cost" trade-off is also addressed, by recommending that current ecological realities be taken into account and that work be done for the common good rather than for a company's individual performance. For example, by opting for an AI algorithm that will perform less well (within the limits of what is acceptable for the business involved) and also consume less computational power, the impact on the planet will be less significant.

Another major issue, linked to the transparency requirement, is the trade-off between performance and explainability (i.e. the ability to understand the reasons for an AIS's behaviour). The European Commission's experts recommend not using AI at all if transparency is impossible or compromised, as it then becomes harder to understand whether the AI is working for the right reasons (e.g. relying on the right characteristics) or why it is failing. "The preservation of the human being, the citizen and society then remains the priority to be considered", sums up Ikram Chraibi Kaadoud.
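
One way to picture this recommendation is as a deployment gate: if no explanation of the system's behaviour is available, the system is not used, however accurate it is. The function name, signature and threshold below are assumptions made for this sketch, not the Commission's wording.

```python
# Illustrative "no transparency, no deployment" gate.
# The threshold and the boolean flag are assumptions for this example.

def can_deploy(accuracy: float, explainable: bool,
               accuracy_floor: float = 0.90) -> bool:
    """Approve deployment only if the model is accurate AND explainable."""
    if not explainable:
        # Transparency compromised: we cannot check whether the model
        # works for the right reasons, so the advice is not to use it.
        return False
    return accuracy >= accuracy_floor

print(can_deploy(accuracy=0.91, explainable=True))   # True
print(can_deploy(accuracy=0.97, explainable=False))  # False: opaque model rejected
```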

Trust in AI from a human perspective

Frédéric Alexandre, Research Director at the Inria Centre at the University of Bordeaux and head of the Mnemosyne project team, and his doctoral student Baptiste Pesquet are exploring the possibility of developing AIs that are intrinsically capable of assessing these levels of trust and using them in their decisions. To do this, they are building on bio-inspired cognitive models and are also looking at metacognition capabilities, i.e. applying cognitive procedures not to the decision itself but to the selection of decision rules, based on an estimate of the level of confidence.
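
The following toy sketch illustrates that general idea, not the Mnemosyne team's actual models: a crude confidence estimate (here, the margin between the two most probable options) selects which decision rule is applied, rather than producing the decision directly. The thresholds and rule names are invented for the example.

```python
# Toy metacognitive loop: confidence selects the decision RULE,
# not the decision itself. Thresholds and rule names are invented.

def confidence(probs: list[float]) -> float:
    """Crude confidence proxy: margin between the two most likely options."""
    best, second = sorted(probs, reverse=True)[:2]
    return best - second

def decide(probs: list[float]):
    c = confidence(probs)
    if c > 0.5:
        return "act", probs.index(max(probs)), c     # trust the estimate
    if c > 0.2:
        return "verify", probs.index(max(probs)), c  # act, but double-check
    return "defer", None, c                          # hand over to a human

print(decide([0.85, 0.10, 0.05]))  # high margin: acts on the best option
print(decide([0.45, 0.40, 0.15]))  # low margin: defers to a human
```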

In addition to functionally exploring and modelling the mechanisms behind these confidence estimates, the scientists want to study the practical usability of such approaches in operational decision-making systems: making explicit the criteria and arguments that justify a decision, and thus moving towards explainability in decision-support systems that guide humans in their work and tasks.


Trustworthy AI: a subject involving other areas of advanced digital technology

"When we talk about these issues, there is one point that needs to be deconstructed. It's a preconceived idea that says 'Trustworthy AI is a subject exclusively linked to AI'. It's not true", continues Ikram Chraibi Kaadoud. In fact, among the requirements for trustworthy AI, technical robustness and societal and environmental well-being raise issues of cybersecurity, data science and bias, user-centred design, trusted human-machine interaction, as well as digital sustainability.

"In other words, to achieve trustworthy AI, you need to get a number of experts to work together and also train your teams around these subjects, from the initial ideation stages of a project right through to its production launch, including, among other things, data collection and management, model design and training, and user testing, to ensure that the various requirements are met at each stage of designing trustworthy AI," continues the young project manager. 

Responsible, European generative AI for research

"The year 2023 was marked in particular by the rise of generative AI. At the same time as this technical boom, the subject of AI has become increasingly important in civil society: creative professions have found themselves faced with sudden, highly effective competition, teachers have been confronted with new student-GPT working pairs, and employers have had to decide whether or not to authorise this tool internally, etc." continues Ikram Chraibi Kaadoud.

While many efforts are under way in France and Europe to structure, inform and educate people about generative AI and its associated risks, 2024 has seen the emergence (or rather the formal acknowledgement) of a fear concerning the research professions. With the rapid expansion of this technology across all science-related fields, AI has transformed research, making "scientific work faster, more efficient, accelerating discovery by offering more convenience in the production of text, images and code".

Against this backdrop, last March the European Union published Guidelines on the responsible use of generative AI in research[4], aimed at the public and private scientific community. These recommendations address the main opportunities and challenges, making researchers aware of the technology's limits, particularly as regards plagiarism, the disclosure of sensitive information and the biases inherent in the models. Based on the principles of research integrity, they offer guidance to scientists, research organisations and research funders to ensure a consistent approach across Europe, building on the Trustworthy AI Guidelines.

The European Guidelines[5] of 2019 and the AI Act of 2024 underline Europe's determination to unite, structure and reassure stakeholders on these issues, taking into account the concerns of all parts of society. While certain questions remain open, such as the legislative framework and state control, it is now possible to think about, design and industrialise trustworthy AIS for the public and private players of today and tomorrow.
