

SCM Grenoble - 25/06/2019

Algorithmic decision making: opportunities and risks for society

An interview with Claude Castelluccia and Daniel Le Métayer (Inria - PRIVATICS) on the report they recently presented to the European Parliament (at the request of STOA, the Science and Technology Options Assessment panel) on the risks and opportunities of decision support algorithms.
The work carried out by our two researchers is also set against the backdrop of the recent launch of the 3IA Institutes in France, and the desire on the part of the French government to strengthen initiatives within the “AI for humanity” programme.

Does your study focus on decision support algorithms or automated decision making? Could you give us a concrete example of a case where automated decision making is already in use?

The question of the role humans play, or indeed don't play, in decision making is naturally a crucial one. In some cases, such as driverless metro trains or programs for placing orders on the stock markets, the decision is entirely automated. In other cases, such as search engines or platforms for booking accommodation, recommendations are made to the user, but ultimately the decision lies with them. However, the boundaries are not always so clearly defined. Consider the hypothetical situation of a doctor with access to a reliable diagnostic support system who chooses to follow all of the recommendations it makes. Strictly speaking, this is not automated decision making, given that the doctor has the final say, but if they never question the results supplied by the algorithm, one could argue that, in reality, it is the system that is making the decision.
As decision support algorithms become increasingly commonplace, to the point where many internet users are unaware of their existence (as is the case, perhaps most notably, for many users of Facebook and its news feed algorithm), these questions are becoming more and more critical. We feel it is important to stress that not all of these algorithms are based on learning, even if this technique raises more complex questions and has found itself more in the spotlight. The algorithms used in Parcoursup or Score Cœur (for matching transplant recipients with donors) raise interesting questions without actually involving learning.

In your opinion, why is it that we are turning towards these sorts of techniques?

There are three main reasons for this. First and foremost, these applications are capable of performing certain tasks more efficiently than humans. Driverless metro trains, for example, help make travel safer and smoother, and we can hope for the same benefits from driverless cars. These systems are also capable of processing volumes of data far beyond what humans can handle (analysing very large numbers of images, for example, or a large body of past case law). Generally speaking, they help cut costs while offering a better level of service, or even completely new services. There are a number of examples of this in medicine: these systems can be used to improve decision making along care pathways (hospitalisation or treatment at home, the need for further tests, etc.), to detect symptoms of illnesses much earlier, to analyse the efficacy of treatments, and so on.

So what are the risks?

There are different types of risk. The ones we talk about the most, particularly for systems based on learning, are risks involving bias in the data used to train these systems. In reality, the available data will always reflect any bias found in past behaviour. There have been some well-known examples of this, including the COMPAS system used to predict the risk of recidivism in a number of courts in the USA. It was found, for example, that COMPAS was far more unfavourable to black defendants: the rate of false positives (individuals wrongly identified as being at high risk of recidivism) was twice as high among the black population as among the white population.
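By way of illustration, here is a minimal sketch (in Python, using purely hypothetical records rather than the actual COMPAS data) of how such a disparity can be measured: the false positive rate - the share of people who did not reoffend but were nonetheless labelled high risk - is computed separately for each group and then compared.

```python
# Minimal sketch: measuring a false-positive-rate disparity between two groups.
# The records below are hypothetical, not the actual COMPAS data.

def false_positive_rate(predicted_high_risk, reoffended):
    """Share of people who did not reoffend but were labelled high risk."""
    false_positives = sum(p and not r for p, r in zip(predicted_high_risk, reoffended))
    negatives = sum(not r for r in reoffended)
    return false_positives / negatives

# (group, predicted high risk?, actually reoffended?)
records = [
    ("A", True,  False), ("A", True, True), ("A", False, False), ("A", True, False),
    ("B", False, False), ("B", True, True), ("B", False, False), ("B", True, False),
]

for group in ("A", "B"):
    preds  = [p for g, p, _ in records if g == group]
    labels = [r for g, _, r in records if g == group]
    print(f"group {group}: false positive rate = {false_positive_rate(preds, labels):.2f}")
```

On these toy records the rate for group A comes out at twice that of group B, which is the kind of disparity that was reported for COMPAS.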
There are also security risks, given that these systems can be attacked and their mechanisms bypassed (an example of this type of attack on images can be found on p. 35 of the report).
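To give a rough idea of the mechanism behind such attacks (the real examples in the report involve deep image classifiers), here is a deliberately simplified sketch with a toy linear model: a perturbation so small that each individual "pixel" barely changes is enough to flip the decision.

```python
# Simplified sketch of an adversarial perturbation on a toy linear "classifier".
# Real attacks on deep image models follow the same idea, using gradients.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=784)   # hypothetical weights for a 28x28 "image"
x = rng.normal(size=784)   # an input classified as sign(w . x)

score = float(w @ x)
original_class = np.sign(score)

# Smallest uniform per-pixel change that flips this linear model's decision.
epsilon = abs(score) / np.sum(np.abs(w)) * 1.01
x_adv = x - original_class * epsilon * np.sign(w)

print("original class          :", original_class)
print("class after perturbation:", np.sign(float(w @ x_adv)))
print("largest per-pixel change:", float(np.max(np.abs(x_adv - x))))
```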
Lastly, there are also privacy risks, with these systems consuming large quantities of personal data. This data - which may be sensitive, such as health data - is used during operational phases, but also earlier on in the process, during the training phase.

In your opinion, what must be done to ensure that we are able to get the most out of these systems without having to deal with the negatives? Why is this proving so difficult?

We came up with a number of recommendations in our report. The report is also accompanied by a second document entitled “Options Brief”, targeted more specifically at EU legislators.
We feel that impact analysis is the soundest strategy to adopt in this regard. Just as the GDPR requires impact analyses to be carried out on privacy, “algorithmic” impact studies should be required prior to the deployment of these systems, given the impact they can have on the individuals in question. Following the publication of our report, the Canadian government adopted a directive for this very purpose. More generally, the declarations of best practice and other ethical charters that are multiplying around AI need to be made more concrete and verifiable. In the report, we place a great deal of emphasis on this notion of accountability, i.e. the need to be answerable.
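Purely as an illustration (the questions, weights and thresholds below are hypothetical and do not reproduce the Canadian tool), an algorithmic impact assessment can be thought of as a questionnaire that maps a system's characteristics to a risk level, and to corresponding obligations, before deployment:

```python
# Illustrative sketch of a questionnaire-based algorithmic impact assessment.
# Questions, weights and thresholds are hypothetical, not the Canadian directive's.
QUESTIONS = {
    "decision affects legal rights or access to an essential service": 3,
    "system uses sensitive personal data (health, criminal record, ...)": 2,
    "decision is fully automated, with no human review": 3,
    "model is trained on historical data that may contain bias": 2,
    "outcome cannot be explained to the person concerned": 2,
}

def impact_level(answers):
    """answers: dict mapping each question to True/False."""
    score = sum(weight for question, weight in QUESTIONS.items() if answers.get(question))
    if score >= 8:
        return "high impact: independent audit and ongoing monitoring"
    if score >= 4:
        return "moderate impact: documented review and human oversight"
    return "low impact: basic documentation"

worst_case = {question: True for question in QUESTIONS}  # every risk factor present
print(impact_level(worst_case))
```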

From a research point of view, the main area of focus is explainability

How can we ensure that humans are able to understand the decisions that these systems take or recommend? This question becomes even more critical when it is the human who must take the decision and, ultimately, bear responsibility for that decision. We know, however, that for certain types of applications, the most accurate systems (in terms of how correct their forecasts are) are also the most opaque. This is true for deep neural networks, for example. How can we strike the right balance between accuracy and explainability? How can we formulate explanations in such a way as to ensure that they are actually useful? How can we measure this concept of usefulness? These sorts of questions will be the subject of a great deal of research in the years to come, and we feel it is important for this research to be undertaken in an interdisciplinary manner, not only with legal practitioners but also with psychologists, sociologists, philosophers, etc.
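A minimal sketch of this trade-off, on synthetic data and with hypothetical model choices: a shallow decision tree can be printed and read by a human, whereas a larger ensemble is typically somewhat more accurate but much harder to inspect.

```python
# Sketch of the accuracy / explainability trade-off on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

interpretable = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
opaque = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

print("shallow tree accuracy :", accuracy_score(y_test, interpretable.predict(X_test)))
print("random forest accuracy:", accuracy_score(y_test, opaque.predict(X_test)))

# The shallow tree's rules can be shown to the person affected by a decision...
print(export_text(interpretable))
# ...whereas explaining the forest requires post-hoc techniques (feature
# importances, surrogate models, etc.), which remains an open research question.
```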
Many questions have also been raised from a regulatory perspective: under what circumstances must we demand explainability, for example? In what fields must we demand certification for decision support tools (as is the case, for example, with medical devices)?
Lastly, we need to launch a public debate on some of the ways in which these decision support tools are used: on facial recognition, for example, as recommended by the CNIL, or on uses in military, legal and other contexts. The need for such a debate is particularly pressing given the tensions that can develop between different principles or objectives - sometimes we need to come down in favour of one side or the other. Tensions can also build up between the level of efficiency (or accuracy) of a system and its explainability, between people's privacy and their safety, and so on. It is also vital to ask these questions while weighing up all of the risks and benefits both of using and of choosing not to use decision support systems.

Why did the European Parliament commission you to write this report?

The reality is that politicians, whether at EU or national level, tend to come from humanities or legal backgrounds: they are aware of the challenges, but often find themselves ill-equipped to deal with the complexity of technical questions. For this reason, they need expertise. They came to us, as have other institutions (the European Council, the National Assembly, etc.), because we have earned recognition through our scientific research, but also because we have, for many years now, adopted an interdisciplinary approach. What this means is that we are capable of tackling legal issues and holding discussions with legislators on these subjects. In the age of technological societies, we feel it is essential to further develop these transversal skills. Inria, in particular, has a key role to play, and must be more proactive in this regard.
