Artificial intelligence

Building trust in AI through a better understanding of algorithms

Date: 18/07/2022
Artificial intelligence has continued to develop over recent decades, becoming increasingly prominent in our daily lives and exerting ever greater influence over human decisions. Although its effectiveness in many sectors is beyond question, AI remains complex and opaque, and is poorly understood by the majority of users. The aim of the EU project TRUST-AI is to develop solutions that make AI more accessible and easier to understand.
Illustration of a chess player. Photo: Jan Vašek from Pixabay

Artificial intelligence and interpretability: a key challenge

Artificial intelligence models, often referred to as “black boxes”, are known for their effectiveness and the highly relevant results they deliver. But a lack of transparency about how these models work means many users now view them with a degree of scepticism. There is also an ethical dimension to the interpretability of algorithms, as the influence of artificial intelligence over our day-to-day decisions continues to grow. When an algorithm influences a medical decision, for example, both the healthcare professionals and the patients concerned must be able to understand the chain of reasoning that led to it. Interpretability is therefore crucial to building trust in artificial intelligence among users.

It was with this in mind that the TRUST-AI project was set up, bringing together a number of research institutes and private companies. Coordinated by Gonçalo Figueira from the Institute for Systems and Computer Engineering, Technology and Science (INESC TEC) in Portugal, the project includes members of the Tau project team from the Inria Saclay-Île-de-France research centre. Other participants are members of the Dutch Research Council, the University of Tartu in Estonia and three companies: APINTECH from Cyprus, LTPlabs from Portugal and Tazi from Turkey. Officially launched in October 2020, the project is funded by the European Innovation Council (EIC) as part of the Horizon Europe programme, with a budget of €4 million over four years.

Putting human intelligence at the heart of the process

The aim of TRUST-AI is to develop an artificial intelligence platform, TRUST (Transparent, Reliable and Unbiased Smart Tool), that is both reliable and participatory, helping to make artificial intelligence more accessible and more responsible. What sets it apart is the way in which it involves human intelligence in the discovery process. “TRUST-AI is designed to get learning algorithms to dialogue with users in order to guide the creation of models, introducing human factors into the learning loop at the earliest possible stage,” explains Marc Schoenauer, director of research for the Tau project team.
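
By way of illustration only, the sketch below shows the shape such a dialogue could take: candidate models are shown to a user, whose preference seeds the next round of search. Nothing here comes from the TRUST platform itself; every name (candidate_models, ask_user) is hypothetical, and the user's answer is simulated.

```python
# A deliberately simplified sketch of a human-in-the-loop model search.
# All names are hypothetical; a real system would present actual models
# through an interface rather than strings, and a human would answer.
import random

def candidate_models(seed):
    """Generate two hypothetical variants of the current model (names only)."""
    return [f"{seed}+rule{random.randint(1, 9)}" for _ in range(2)]

def ask_user(options):
    """Stand-in for the dialogue step: a human would inspect and choose here."""
    print("Which candidate do you find more plausible?", options)
    return random.choice(options)  # simulated answer; a real UI would ask

best = "base_model"
for _ in range(3):
    best = ask_user(candidate_models(best))  # human preference steers the search

print("model shaped by user feedback:", best)
```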

Within the framework of this project, the researchers have opted for an approach that combines several methods: user interface improvement, cognitive science and genetic programming, one of the themes of Marc Schoenauer's research. Inspired by natural selection, genetic programming evolves models in the form of explicit expressions, so that every choice made by the algorithm can be traced. It is less effective, however, than deep learning on problems involving large data sets. The two approaches are therefore complementary: genetic programming delivers explainability, while deep neural networks deliver performance.
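
To show why genetic programming lends itself to explainability, here is a minimal, mutation-only sketch in plain Python that evolves a small symbolic expression to fit sampled data. It is a generic example of the technique, not code from the TRUST platform; the population sizes, depth limits and target function are arbitrary choices for illustration.

```python
# Minimal genetic programming for symbolic regression (mutation-only variant).
import random
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def random_tree(depth=3):
    """Build a random expression tree: nested tuples (op, left, right) or leaves."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", round(random.uniform(-2, 2), 2)])
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    """Recursively compute the value of the expression tree at point x."""
    if tree == "x":
        return x
    if isinstance(tree, (int, float)):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree, samples):
    """Mean squared error against the target samples (lower is better)."""
    return sum((evaluate(tree, x) - y) ** 2 for x, y in samples) / len(samples)

def mutate(tree):
    """Replace a randomly chosen subtree with a fresh random one."""
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

def show(tree):
    """Render the tree as a readable infix formula."""
    if isinstance(tree, tuple):
        op, left, right = tree
        return f"({show(left)} {op} {show(right)})"
    return str(tree)

# Target to recover: y = x^2 + x, sampled on a few integer points.
samples = [(x, x * x + x) for x in range(-5, 6)]
population = [random_tree() for _ in range(200)]
for generation in range(30):
    population.sort(key=lambda t: fitness(t, samples))
    survivors = population[:50]                                  # selection
    offspring = [mutate(random.choice(survivors)) for _ in range(150)]
    population = survivors + offspring                           # next generation

best = min(population, key=lambda t: fitness(t, samples))
print("best model:", show(best), "| MSE:", fitness(best, samples))
```

Because the evolved model is an explicit formula, such as ((x * x) + x), a user can trace exactly how each prediction is computed; that readability is what makes genetic programming attractive for explainability, in contrast to the opaque weights of a deep neural network.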

The platform will also draw on previous research carried out by the researchers involved in the project on subjects such as symbolic artificial intelligence and human-guided machine learning. This research has led to the discovery of explainable solutions for problems in both real-life and academic settings. “Our aim now is to develop these ideas using cognitive models, enhanced human interaction and improved learning algorithms, and to explore them in different use cases”, explains Gonçalo Figueira.

Useful and generalisable solutions

This approach will result in artificial intelligence solutions that are both easier to understand and generalisable, particularly in situations where human responsibility is essential. The solutions developed will be especially useful for medical diagnoses involving artificial intelligence: making the process interpretable is vital when algorithms influence the diagnosis of rare tumours, for example, or help determine the right time to operate.

In an altogether different context, these solutions could also be used to manage stocks of fresh products, helping to optimise deliveries and anticipate possible delays. Finally, in the energy sector, predicting energy use in order to optimise the running of power plants will affect consumers through price modelling, and citizens have a legitimate right to know how companies determine the tariffs applied to them.

The solutions developed as part of the project could also find a home in a wide range of other sectors such as banking, insurance and the civil service.