Awards & Honours

Luis Galárraga makes artificial intelligence more understandable

Changed on 02/08/2022
By automatically comparing vast amounts of data, AI algorithms generate knowledge that enables them to make predictions and decisions. Their choices increasingly impact our daily lives. The problem is that, as the technology stands today, their decision-making lacks transparency: humans cannot tell what the software is basing its decisions on, and everyone may therefore feel that they are the victim of arbitrary decision-making. This is why these black-box algorithms need to be made more interpretable, and it is precisely what Luis Galárraga is working on at the Inria Rennes-Bretagne Atlantique Centre. The French National Research Agency has just awarded him a Young Researcher Grant to study methods to bring humans back into the loop.
Luis Galárraga - extraction of association rules in knowledge bases
© Inria / Photo C. Morel

It’s the everyday tale of a couple applying for a mortgage for a home. The application is analysed by a software program. Up pops the response: it says “no”. Surprise. Disappointment. Puzzlement. What criteria did the algorithm use to reject their application? Age? Profession? Salary? Employer’s pedigree? Family background? Number of children? Nationality? Amount in savings account? It’s a mystery. Although artificial intelligence algorithms are very efficient, it’s difficult to understand what lies behind their decisions. Several Inria teams are currently working on this question of interpretability.

“There are essentially two ways of trying to explain the logic used by AI”, says Luis Galárraga, a member of the Lacodam team. “The first is to produce a ranking-by-importance of the attributes that most influenced the response. I can now see that the factors most taken into account are age, then salary, and so on. The second way to explain how AI works is to use rules. For example, if the person is younger than a certain age and earns less than a certain amount, in general, the credit risk is high.” The first method is based on statistical analysis and the second on a logic tree.
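For readers who want a concrete picture, both styles can be sketched in a few lines of scikit-learn (the library mentioned later in this article) on a toy, synthetic “loan” dataset. This is a generic illustration, not Galárraga’s own method, and the attribute names and data are invented for the example.

```python
# Minimal sketch of the two explanation styles, on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
feature_names = ["age", "salary", "savings", "children"]  # illustrative attributes
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic "loan approved" label

black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Style 1: a ranking of attributes by importance (the statistical view).
imp = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
ranking = sorted(zip(feature_names, imp.importances_mean), key=lambda t: -t[1])
print("Attribute ranking:", ranking)

# Style 2: rules, read off a small surrogate decision tree (the logic view).
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box.predict(X))  # the tree mimics the black box
print(export_text(surrogate, feature_names=feature_names))
```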

The research project funded by the French National Research Agency is called FAbLe. It will run for four years. “The idea is to provide explanations for individual cases. For example, we take an AI model trained to approve or refuse a loan, and then we consider which method is the most appropriate to provide an explanation. Is it better to explain these individual cases using a ranking of attributes or is it better to explain them using a rule? That, in a nutshell, is the piece of the puzzle we want to provide. The computer will look at these cases and decide each time which method is the most appropriate.”
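The article does not describe how FAbLe will make that choice. The sketch below only illustrates the general idea under naive assumptions: build both kinds of local explanation around a single instance and keep whichever mimics the black box more faithfully, using an ordinary goodness-of-fit score as a crude stand-in for a real fidelity metric.

```python
# Naive illustration of the idea (not FAbLe itself): for one instance,
# build both kinds of local explanation and keep the more faithful one.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.tree import DecisionTreeClassifier

def pick_explanation(black_box, x, feature_names, n_samples=500, scale=0.3, seed=0):
    rng = np.random.default_rng(seed)
    # Perturb the instance to get a local neighbourhood, then query the black box.
    Z = x + rng.normal(scale=scale, size=(n_samples, x.shape[0]))
    yz = black_box.predict_proba(Z)[:, 1]

    # Candidate 1: attribute ranking from a local linear surrogate.
    linear = Ridge(alpha=1.0).fit(Z, yz)
    fid_linear = linear.score(Z, yz)  # R^2 as a crude fidelity proxy

    # Candidate 2: rules from a shallow local decision tree.
    tree = DecisionTreeClassifier(max_depth=3, random_state=seed)
    tree.fit(Z, (yz > 0.5).astype(int))
    fid_tree = tree.score(Z, (yz > 0.5).astype(int))

    if fid_linear >= fid_tree:
        ranking = sorted(zip(feature_names, np.abs(linear.coef_)), key=lambda t: -t[1])
        return "ranking", ranking
    return "rules", tree
```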

Generating a kind of translation

One of the complexities of the exercise will be to translate decision-making processes into something that’s easier for humans to understand. “Imagine using a neural network designed to detect objects in images. Dogs, for example. If I examine this AI model in detail, I see that it treats each pixel as input data. It then decides that it’s a dog because pixel number 9998 is this colour and pixel 9999 is like this or that. But for a human, this is totally incomprehensible! I cannot give that as an explanation.”

So what do we do?

“We have to generate a kind of translation. I take the information used by the black box and convert it into a language that humans understand. In the case of an image, the algorithms build superpixels, which are contiguous regions of that image. A superpixel made up of blue pixels in the upper part of the picture is usually sky. But how do we use this feature in the explanation? I can’t simply say that pixels one to one million are blue. I need to formulate the fact that this is sky. In other words, I need to give semantics to the explanations. That’s the idea. We may also want to indicate which of these superpixels carried the most weight in the algorithm’s decision. If an AI model has to categorise photos as outdoor or indoor, it will give predominance to the superpixel that appears to be sky.”
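As a rough sketch of that translation step for images: superpixels can be computed with scikit-image, and the weight of each one can be estimated by greying it out and measuring how much the model’s score drops, in the spirit of LIME-style perturbation. This is an illustration rather than the project’s actual method, and `model_score` is an assumed placeholder for whatever classifier is being explained.

```python
# Group pixels into superpixels, then score each one by masking it and
# watching how the model's prediction changes. Illustrative only.
import numpy as np
from skimage.segmentation import slic

def superpixel_weights(model_score, image, n_segments=50):
    """model_score: assumed callable mapping an image to the probability
    of the class of interest (e.g. "outdoor")."""
    segments = slic(image, n_segments=n_segments, start_label=0)
    base = model_score(image)
    weights = {}
    for seg_id in np.unique(segments):
        masked = image.copy()
        masked[segments == seg_id] = image.mean(axis=(0, 1))  # grey out one region
        # A large drop means this superpixel mattered (e.g. the "sky" region).
        weights[seg_id] = base - model_score(masked)
    return segments, weights
```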

Comparing methods

Trying to put oneself in the user’s shoes can sometimes lead to disappointment. “Significant effort is often put in. Despite this, people may still not understand how things have been formulated. This means that we have to start from scratch. And this takes time.” Hence the need for metrics. “It is the human who holds part of the answer, especially when it comes to comparing different methods. Which method will be the best for the user? The one that highlights, for example, 10 attributes? Or the one that provides a very complex logic tree with many rules? And if we use attributes, how many do we need? At how many does the user switch off? Or is the tree too complex? There’s a whole cognitive aspect to this that we’re going to need human metrics for.”

To complicate matters further, these user preferences will vary depending on the audience and context. “With a diagnostic aid application, for example, doctors expect a sophisticated black box. To meet that expectation, we would need to go for a less simplistic explanation with a lot more information.” Conversely, in other contexts, users may want a more concise explanation.

Industrial-grade software

The project will include a PhD thesis. “The French National Research Agency is also funding a year of engineering. This will enable us to implement the results in a tool that we’re going to call FAbLe. The aim is not only to design a demonstrator to support our publications, but also to provide an industrial-grade software solution that’s licence-free, so that the entire AI community can then use it easily. In practice, we hope to include it as part of Scikit-Learn, a software library incorporating features widely used in machine learning. We also plan to port the FAbLe code to TensorFlow, the reference library in the field of neural networks.”

Luis Galárraga: from Guayaquil, Ecuador, to Inria Rennes


Luis Galárraga was born in Guayaquil (Ecuador), where he obtained a Bachelor’s degree in Computer Engineering at ESPOL. In 2009, he decided to continue his studies and moved to Germany, where he pursued a Master’s in Computer Science at Saarland University. In 2012, he obtained an IMPRS scholarship at the Max Planck Institute for Informatics as a PhD student under the supervision of Fabian Suchanek, working on rule mining in RDF knowledge bases. After his PhD, he worked as a postdoctoral researcher in the Computer Science Department of Aalborg University. In October 2017, he was recruited by Inria as a permanent researcher and joined the LACODAM team at Inria Rennes.