
Award

19/11/2015

Jonathan Grizou wins the 2015 Le Monde Prize for university research

Launched in 1997, the Le Monde Prize for university research is intended to highlight the early work of young French-speaking researchers who are likely to influence our scientific, economic, social and/or artistic environment. Jonathan Grizou, who defended his PhD in October 2014 in the Flowers project team (a joint team with the University of Bordeaux and Ensta ParisTech), is one of the winners of this 18th edition of the prize.

Following on from his doctoral work in the Flowers project team, Jonathan is now doing a postdoc in a chemistry group in Glasgow (the Cronin Group), a highly original group that uses robots and artificial intelligence to explore the space of possible chemical reactions.

The article that was nominated for the prize:

When Machines Learn to Read our Minds

by Jonathan Grizou

In his book The Diving Bell and the Butterfly, Jean-Dominique Bauby recounts the life he led before his brain injury and his experience of locked-in syndrome, which imprisoned him in a body that no longer responded to commands from his brain. We can now give people affected by severe disabilities the ability to communicate again thanks to brain-machine interfaces, which control a device via brain waves.

The neuroelectric activity of the cortex was first measured in 1875 by Richard Caton. A century later, it became possible to decode certain basic instructions (yes/no, up/down) from that activity. This considerably augmented patients' ability to communicate. Today, for example, it is possible to write autonomously on a computer through the power of thought alone.

Unfortunately, many technical obstacles still need to be overcome before that level of autonomy is achieved. The most difficult challenge is to translate the patient's brain waves into instructions for the machine. Because every person's brain signals are unique, a specific decoder must be built for each patient via a calibration phase. This involves collecting hundreds of examples of brain waves from the same patient, which are then interpreted by an expert to build a fully personalised decoder.
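To make the idea of calibration concrete, here is a minimal sketch in Python (using scikit-learn); the data sizes, feature dimensions and model choice are illustrative assumptions, not the pipeline used in the thesis. The point is simply that a personalised decoder amounts to a classifier trained on labelled brain-wave examples from one patient.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical calibration data: 300 labelled epochs from one patient,
# each epoch already reduced to a 64-dimensional feature vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))      # features extracted from the raw EEG
y = rng.integers(0, 2, size=300)    # labels known during calibration (e.g. yes/no)

# The "fully personalised decoder": a classifier fitted to this patient only.
decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
decoder.fit(X, y)

# Before use, an expert would check that decoding accuracy is acceptable.
print(cross_val_score(decoder, X, y, cv=5).mean())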

This process is long and laborious, requires the involvement of a specialist, and needs to be repeated at regular intervals, which explains why the development of brain-machine interfaces is taking a long time. An interface that could adapt automatically to each user, without external intervention, would considerably enhance patients' autonomy. This challenge of self-calibration is what we tackled in this thesis, by studying its many implications and conducting experiments.

First, it is important to understand what we mean by self-calibration of an interactive system. It is equivalent to asking a machine to obey the orders of a person without prior knowledge of the meaning of the signals the person is sending to it. In the case of writing via thoughts, the computer must guess which letter the patient has chosen without the benefit of that person's brain wave decoder. It must therefore construct that decoder during the actual interactions.

Fortunately, it was possible to solve this problem thanks to certain constraints and invariants. We can rely on the logic of the patient, who is trying to achieve a single goal with the machine (dictating the next letter). We also know that the patient will always use similar signals to express the same thought. So, using techniques ranging from signal processing to statistical learning, we defined a measure of the consistency of a patient's brain waves over time and with respect to a specific goal. Then all we had to do was generate hypotheses about the patient's intention (each one being a letter of the alphabet) and measure the consistency of the recorded signals under each hypothesis. The most consistent hypothesis (letter) is then chosen.
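The sketch below, again in Python with scikit-learn, shows one possible reading of this idea rather than the published implementation: each candidate letter induces a hypothetical labelling of the signals recorded so far, and the letter whose labelling makes those signals most internally consistent is selected. The function names and the use of cross-validated separability as the consistency measure are assumptions made for this example.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def consistency(signals, hypothetical_labels):
    # How well do the signals separate into the two classes implied by the
    # hypothesis ("the machine's action matched my intention" vs. "it did not")?
    # Cross-validated accuracy of a simple classifier serves as the measure.
    return cross_val_score(LinearDiscriminantAnalysis(), signals,
                           hypothetical_labels, cv=3).mean()

def infer_intended_letter(signals, machine_actions, alphabet):
    # For each hypothesised target letter, relabel every past signal according
    # to whether the machine's action agreed with that hypothesis, then keep
    # the letter whose labelling is the most self-consistent.
    scores = {}
    for letter in alphabet:
        labels = np.array([int(action == letter) for action in machine_actions])
        if len(np.unique(labels)) < 2 or np.bincount(labels).min() < 3:
            continue  # too few examples of one class to assess consistency
        scores[letter] = consistency(signals, labels)
    return max(scores, key=scores.get)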

This means that a patient's commands can be revealed without ever directly knowing what his/her thoughts are. This feat is accomplished by considering the overall consistency of the patient's brain waves over time. But these ideas still needed to be confirmed experimentally, and this was done through an international partnership.

So we equipped a brain-machine interface with our self-calibration algorithm. Eight non-disabled subjects managed to control a computer with their thoughts, without going through the tedious calibration phase [1]. The machine adapted by itself to the specific characteristics of each person, opening the door to greater autonomy for patients who need this type of interface.

All that remains is to make these attractive prospects, patient autonomy and improved living conditions, a reality. To do this, and to respond fully to the expectations of the people concerned, it is essential to conduct a large-scale test phase with many patients under normal, everyday conditions of use. Such a phase is necessary if this work on self-calibration algorithms, which is still at the research stage, is to bear fruit.

But the story doesn't end there, because voices, gestures, and muscle impulses are not so different from brain waves. The self-calibration principles we developed therefore also apply to them, opening the way to more flexible interaction with the systems that surround us. Imagine a robot able to adapt by itself to the linguistic particularities (accent, expressions) of each user, or an intelligent prosthesis that understands its wearer's intentions without requiring calibration by a panel of experts. These challenges matter, because the modern world is largely run by automated systems, and yet we still too often expect humans to adapt to the machines.

[1] I. Iturrate, J. Grizou, J. Omedes, P.-Y. Oudeyer, M. Lopes and L. Montesano, "Exploiting task constraints for self-calibrated brain-machine interface control using error-related potentials", PLOS ONE, 2015 (accepted for publication).
