A look under the hood of artificial intelligence with the HyAIAI project

Changed on 09/07/2020
Machine learning is the most widely known sub-domain of Artificial Intelligence (AI). It involves using algorithms to automatically learn knowledge from data. This learned knowledge, known as a “model”, can then be used to make decisions that influence our lives. However, the decisions made by the learned models are not easy for humans to understand. How then can we make these models and the decisions associated with them more transparent? This is the question that six Inria teams are trying to answer as part of a joint project called HyAIAI, coordinated in Rennes by researcher Élisa Fromont.
How to navigate graph structures efficiently (in the foreground) in order to extract information from them
© Inria / Photo S. Erôme - Signatures

Compas is an acronym for Correctional Offender Management Profiling for Alternative Sanctions. In a nutshell, this is a software program used to assess every inmate eligible for parole and advise the US courts of the likelihood of their reoffending. Suffice it to say that the algorithm’s opinion will significantly influence the judge’s decision and the prisoner’s life. The problem lies in how much faith should be placed in this tool. Will it draw on social prejudices? Will it judge based on skin colour, for example? To ensure that this does not happen, you need to be able to understand the algorithmic meanderings that form the basis of its decision. But here we find one drawback of even the most efficient machine learning algorithms: it is often difficult for the layperson to understand the basis on which they make their decisions.

Hence the need to introduce interpretability into these models. This is the objective of HyAIAI, a project involving six Inria teams specialising in this field: Lacodam (Rennes), Magnet (Lille), Multispeech (Nancy), Orpailleur (Nancy), SequeL (Lille) and TAU (Saclay).

Mapping

“Learned models are mathematical functions that map an input to an output”, explains Élisa Fromont. “For example, a model could forecast tomorrow’s weather based on what sensors are reporting today. The algorithm learns the model (the function) using correlations between the weather from previous days and the sensor measurements at that time. The learned model is then used to provide a forecast based on new sensor measurements.”

The most widely known type of machine learning is supervised learning:

“To learn, the algorithm needs examples of both the inputs (e.g. sensor measurements) and the expected outputs for those inputs (e.g. the known weather for the following day). By feeding it this data, it learns what maps to what. This mapping function can vary in complexity and interpretability. It might be a composition of functions with millions of parameters, as in the case of deep neural networks, for example.”
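To make this concrete, here is a minimal sketch of supervised learning in Python, using the Scikit-Learn library mentioned later in this article. The sensor readings, weather labels and choice of model are invented purely for illustration and are not taken from the HyAIAI project.

    # A minimal, illustrative supervised-learning example: learn a mapping from
    # today's (invented) sensor measurements to tomorrow's weather.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Each row: [temperature (deg C), pressure (hPa), humidity (%)] measured today.
    X = np.array([
        [21.0, 1021.0, 45.0],
        [14.0,  998.0, 90.0],
        [18.0, 1012.0, 60.0],
        [11.0,  990.0, 95.0],
    ])
    # Expected output for each input: the known weather on the following day.
    y = np.array(["sunny", "rainy", "sunny", "rainy"])

    model = LogisticRegression().fit(X, y)          # learn the mapping (the "model")
    print(model.predict([[16.0, 1005.0, 70.0]]))    # forecast for new measurements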

Decision trees

Deep neural network learning is not the only form of (supervised) machine learning. Other well-known algorithms can be used to learn ‘decision trees’, for example.

“A decision tree works a bit like the classic children’s game ‘Guess Who?’, where you have to identify a person through a series of questions. Is it a man or a woman? Answer: a man (all the women can now be removed from the list of possible solutions). Next question: does he have a moustache? With each question/answer iteration, the number of possible answers shrinks until only one solution is left. These trees are much easier to interpret: to find out why a decision was made, you simply follow a branch of the tree to see what it was based on. But in the case of neural networks, if a model tells me it made a particular decision because neuron 8433, connected to neuron 8326 with a weight of 36, was activated, the explanation makes no sense.”
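As an illustration, the short Python sketch below (again using Scikit-Learn, with made-up attributes and names) learns a tiny decision tree, prints its sequence of yes/no questions, and shows the branch followed for one particular input.

    # Illustrative only: a tiny decision tree whose reasoning can be read off
    # as a series of questions, like the guessing game described above.
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Invented attributes of each person: [is_a_man, has_moustache, wears_glasses]
    X = [
        [1, 1, 0],
        [1, 0, 1],
        [0, 0, 1],
        [0, 0, 0],
    ]
    y = ["Paul", "Sam", "Anna", "Maria"]

    tree = DecisionTreeClassifier().fit(X, y)

    # The learned tree is a readable series of questions:
    print(export_text(tree, feature_names=["is_a_man", "has_moustache", "wears_glasses"]))

    # For one particular person, decision_path lists exactly which nodes
    # (questions) were used to reach the answer.
    print(tree.decision_path([[1, 1, 0]]).indices)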

Drowning from eating ice cream

In the HyAIAI project, one of the approaches considered (by the Lacodam and SequeL teams in particular) aims to “understand the decision taken based on a particular piece of data presented to the model. Let’s take Compas as an example. When the software assigns a score to a particular prisoner, we want to know which of the prisoner’s characteristics (which data ‘attributes’, as they are known) led to this decision.”
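One very simple way to probe that kind of question, sketched below, is to perturb each attribute of a single input and watch how the model’s score moves. The data, attribute names and model are entirely invented for illustration, and this is not the method actually developed by the HyAIAI teams.

    # A rough, illustrative probe of "which attributes drove this decision":
    # neutralise one attribute of a single individual at a time and observe
    # how the predicted score changes. Data and names are invented.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    attribute_names = ["age", "prior_offences", "employment_years"]
    X = rng.normal(size=(200, 3))
    y = (X[:, 1] > 0).astype(int)          # toy label driven by one attribute

    model = RandomForestClassifier(random_state=0).fit(X, y)

    individual = X[0].copy()
    base_score = model.predict_proba([individual])[0, 1]
    for i, name in enumerate(attribute_names):
        perturbed = individual.copy()
        perturbed[i] = X[:, i].mean()      # replace the attribute with an "average" value
        delta = base_score - model.predict_proba([perturbed])[0, 1]
        print(f"{name}: score changes by {delta:+.3f} when neutralised")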

Behind the question lies a classic machine learning problem: “The algorithm is based primarily on correlations.” There is therefore a risk of things going awry. “For example, it might correlate a prisoner’s likelihood of reoffending with his skin colour, whereas the real causes are to be found in his entire social background: living in a deprived area, being subjected to more police checks and so on. Correlations do, of course, exist, but they are due to external factors. Statistically, people who have just eaten an ice cream have a higher probability of dying by drowning on the same day, so an algorithm would consider that a very good attribute for describing someone who is likely to drown. In reality, it’s not because they’ve eaten an ice cream, but simply because they’re more likely to be at the seaside in summer (the ice cream season!), and when you’re near the sea, you’re more likely to drown than when you’re in the city. Our colleagues on the TAU team are working to figure out these causal mechanisms.”
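The ice-cream example can be reproduced numerically. In the toy simulation below (all numbers invented), both variables are driven by a common cause, so they correlate overall, yet the correlation largely disappears once the season is held fixed.

    # Illustrative toy simulation of a spurious correlation: ice cream and
    # drownings both depend on a common cause (summer), not on each other.
    import numpy as np

    rng = np.random.default_rng(1)
    summer = rng.integers(0, 2, size=5000)                  # common cause
    ice_cream = 2.0 * summer + rng.normal(size=5000)        # depends on the season
    drownings = 1.5 * summer + rng.normal(size=5000)        # also depends on the season

    print("overall correlation:", np.corrcoef(ice_cream, drownings)[0, 1])
    # Within a fixed season, the correlation (roughly) vanishes:
    print("summer only:", np.corrcoef(ice_cream[summer == 1], drownings[summer == 1])[0, 1])
    print("winter only:", np.corrcoef(ice_cream[summer == 0], drownings[summer == 0])[0, 1])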

Debugging

Looking for an explanation also includes an inspection and debugging aspect. This is one of the areas being explored by the Lacodam team. “We open the black box to try to understand and possibly invalidate the decision. Neural networks are known to be particularly ‘boastful’ in that they are always very sure of their decision. And sometimes they’re very sure of something that’s actually very wrong! We study these networks and their internal activation patterns so that we can then try to predict when these networks are going to get it wrong.”
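The sketch below gives a flavour of this kind of inspection without claiming to reproduce the team’s technique: a small neural network is trained on noisy toy data, and its predicted probabilities are then used to count the test points it gets wrong while being very sure of itself.

    # Illustrative "boastful network" check on toy data: count confidently
    # wrong predictions. This is not the HyAIAI teams' actual method.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.datasets import make_moons

    X, y = make_moons(n_samples=1000, noise=0.35, random_state=0)  # overlapping classes
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
    net.fit(X_train, y_train)

    proba = net.predict_proba(X_test)
    pred = proba.argmax(axis=1)
    confidence = proba.max(axis=1)

    wrong = pred != y_test
    # Some mistakes are typically made with very high confidence.
    print("errors made with > 90% confidence:", np.sum(wrong & (confidence > 0.9)))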

Injecting knowledge

Another of the researchers’ objectives is “to be able to inject knowledge beforehand to guide or constrain the decision. For example, a weather algorithm used to forecast cloud cover during the day could use a physical model of cloud movement to help it in its forecasting. Another example comes from chemistry: if you want to predict the configuration of certain molecules, it may be necessary to add specialist knowledge about the bonds that atoms can form with each other. Some algorithms are quite good at handling these types of constraints. This is particularly the case for algorithms running on traditional logic-based systems. However, it is not quite so simple with a neural network.” In Nancy, the Orpailleur team is working on these topics.
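One generic way to inject such knowledge, sketched below under invented assumptions, is to add a penalty to the training objective whenever the model violates a known constraint; here the toy constraint is simply that a learned slope must be non-negative.

    # Illustrative knowledge injection: penalise violations of a (made-up)
    # constraint while fitting a line by gradient descent.
    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.uniform(0, 1, size=100)
    y = 0.5 * x + rng.normal(scale=0.1, size=100)   # toy data

    w, b, lr, lam = 0.0, 0.0, 0.1, 10.0             # parameters, step size, penalty weight
    for _ in range(2000):
        pred = w * x + b
        grad_w = 2 * np.mean((pred - y) * x)        # gradient of the squared error
        grad_b = 2 * np.mean(pred - y)
        if w < 0:                                   # knowledge term: the slope must not be negative
            grad_w += lam * 2 * w
        w -= lr * grad_w
        b -= lr * grad_b

    print(f"learned slope w = {w:.3f} (constrained to be non-negative)")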

In all this work, one of the difficulties stems from the fact that “interpretability is an ill-defined concept. No measurable metric yet exists to determine the degree to which something is interpretable. Is what seems interpretable to someone a good explanation from a general point of view? Take for example the automatic detection of cardiac arrhythmias. A model detects them by observing patterns in the time signal. Okay, but is that a good explanation for making a decision? Is it the decision the experts want? In this case, we could say yes, because doctors, too, are looking for these small characteristic signals in the data. But does this work for all types of data? In the case of a faulty electrical installation, based on the sequence of patterns, can we say that this little bit here and that little bit there provide a good explanation? Having a generic explanation process that works for everything and everyone is not that simple.”

“And it’s not just a technical problem. There’s a human factor in the loop, too. We therefore need to work with researchers in the social sciences to study how the user perceives our explanation. And for those of us in the hard sciences, we’re in uncharted waters.”

Towards Scikit-Explain software

The HyAIAI research project funds six positions for PhD students and postdoctoral researchers. An engineer has also been recruited. “In the machine learning field, there is a widely used open-source library called Scikit-Learn. In our team, Lacodam, we currently have an engineer working on ‘Scikit-Mine’, which will be our own version for performing not machine learning but data mining in a broader sense, including pattern mining. In the same vein, we are now considering a ‘Scikit-Explain’ version focused entirely on interpretability. It would allow us to implement not only our own techniques, but also others proposed by the research community.”

It is also worth noting that scientists at the University of Bristol have been working on a prototype along the same lines, called FAT Forensics (Fairness, Accountability and Transparency). “In their case, they no longer have funding for this software. We may therefore be able to take it over.”

HyAIAI is the acronym for Hybrid Approaches for Interpretable AI.

This project is part of the Inria Challenges, large-scale projects that aim to bring together several teams on a major research theme. Launched in September 2019, HyAIAI will run until September 2022.