Ethics

AI decentralized: how to ensure more fairness and privacy?

Date: 29/11/2023
We entrust our health, our finances and even our faces to AI... but does artificial intelligence always merit such trust? This is just one of the questions the Magnet team from the Inria centre at the University of Lille is exploring. In particular, the team studies how machine learning could become fairer and more respectful of privacy in a decentralised context.
Mesh illustration © Michael Dziedzic / Unsplash

Fairness and respect for privacy, two key concerns

Among the various fields of AI, machine learning consists in training a model to solve a problem by presenting it with a multitude of examples representing the task to be accomplished. But what happens when this model proves less effective for one group of people than for another? It is no longer fair. ‘Let’s imagine a medical application where the goal is to detect suspicious moles’, Michaël Perrot, researcher with the Magnet team, begins. ‘Data changes according to skin tone, and that can raise problems of fairness. If certain moles are identified less effectively in one group, these patients won’t go to see a dermatologist and risk developing a serious illness which could have been avoided.’ This type of bias can notably emerge when the vast volumes of data on which these models are trained are not representative of the population as a whole.
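To make this concrete, here is a minimal sketch (toy data and hypothetical function names, not code from the project) of how such a gap can be detected: compute the model's accuracy separately for each group and compare.

```python
# Minimal sketch with toy data: check whether a classifier performs
# equally well across groups (e.g. skin-tone groups in mole detection).
import numpy as np

def accuracy_per_group(y_true, y_pred, group):
    """Accuracy of the predictions, computed separately within each group."""
    return {g: float(np.mean(y_true[group == g] == y_pred[group == g]))
            for g in np.unique(group)}

# Toy labels, predictions and group attribute for eight patients.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 1, 0, 0, 0, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

per_group = accuracy_per_group(y_true, y_pred, group)
gap = max(per_group.values()) - min(per_group.values())
print(per_group, "accuracy gap:", gap)  # a large gap signals an unfair model
```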

In the medical example above, the training data can be collected from hospital consultations. However, each hospital has only a partial view of the problem via its local population, and it will therefore be difficult to obtain a fair model. One solution consists in using decentralised learning to cross-reference data sources and thus enrich them.
How does it work? Several entities communicate with one another, with a cooperative aim, without sharing potentially sensitive data or storing it in a single location managed by a third party. Data sovereignty is an important factor here, yet it does not guarantee an adequate level of privacy for the individuals present in the learning databases. ‘Even if the data is not exposed directly, the models trained on it can be used to recover sensitive information’, Michaël Perrot explains. ‘We thus need to develop specific learning mechanisms which make this impossible.’
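The principle can be illustrated with a very simplified federated-averaging loop (a generic sketch with toy data, not the team's DecLearn software): each participant trains on its own data, and only the resulting model parameters are exchanged and averaged.

```python
# Minimal federated-averaging sketch (hypothetical names, NumPy only):
# each hospital trains locally and only shares model parameters, never raw data.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps of logistic regression."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-X @ w))
        grad = X.T @ (preds - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_weights, clients):
    """Average the locally updated weights; the raw (X, y) never leaves a client."""
    updates = [local_update(global_weights, X, y) for X, y in clients]
    return np.mean(updates, axis=0)

# Two "hospitals", each with its own private toy data.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.integers(0, 2, 50)) for _ in range(2)]
w = np.zeros(3)
for _ in range(10):          # ten communication rounds
    w = federated_round(w, clients)
print(w)
```

As the quote above notes, keeping the raw data local is not enough on its own: the shared parameters themselves can leak information, which is why dedicated privacy-preserving mechanisms are needed on top of this scheme.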

Developing fairer models

To address this issue, Michaël Perrot launched a project entitled ‘Fairness in privacy preserving decentralized learning’, backed by the STaRS grant, which supports scientific research talents based in the Hauts-de-France (northern France) region.

‘The aim of this project is to design, in a decentralised manner, new learning algorithms that respect privacy and learn models that do not discriminate against certain groups of individuals’, he tells us.


The first step of the project is to develop an algorithm for learning fair models that is as simple as possible, to which the additional constraints of privacy and decentralisation can then be added. ‘We thus created an open-source method called FairGrad (Fairness Aware Gradient Descent)’, says Michaël Perrot. ‘It enables fair models to be trained in a simple way and is compatible with PyTorch, one of the standard machine learning libraries. It is based on the principle of example reweighting: the idea is to increase the importance of disadvantaged individuals while reducing the impact of advantaged ones. Our current objective is to combine this FairGrad method with the DecLearn software for decentralised learning, developed by our Magnet team.’
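The reweighting principle itself is easy to picture. The sketch below is a toy illustration of that idea, not the actual FairGrad code: the group labels and weight values are made up, and per-example weights simply scale each sample's loss in a standard PyTorch training step so that the disadvantaged group contributes more to the gradient.

```python
# Toy illustration of example reweighting in PyTorch (not the FairGrad code):
# per-example weights scale each sample's loss, so up-weighting a disadvantaged
# group increases its influence on the gradient update.
import torch
import torch.nn as nn

model = nn.Linear(10, 1)                              # toy binary classifier
criterion = nn.BCEWithLogitsLoss(reduction="none")    # keep one loss per example
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

X = torch.randn(32, 10)                               # toy batch
y = torch.randint(0, 2, (32, 1)).float()
group = torch.randint(0, 2, (32,))                    # 0 = advantaged, 1 = disadvantaged (toy)

# Hypothetical weights: up-weight the disadvantaged group, down-weight the other.
weights = torch.where(group == 1, torch.tensor(1.5), torch.tensor(0.75))

optimizer.zero_grad()
losses = criterion(model(X), y).squeeze(1)            # one loss value per example
loss = (weights * losses).mean()                      # reweighted objective
loss.backward()
optimizer.step()
```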

Studying the interplay between fairness, privacy and decentralised learning

In addition to algorithmic development, one specificity of the project is to explore the issue from a theoretical perspective to answer several questions that arise when imposing fairness constraints. How does fairness interact with decentralised learning? How does it interfere with privacy preserving machine learning? And lastly, how do the three concepts interact together? While the fields of fairness, privacy and decentralised learning have been widely explored individually, their interactions have received less attention in the current scientific literature.

Michaël Perrot thus looked into the impact of privacy constraints on fairness. With his co-authors, he demonstrated that this impact is in fact very limited when the data set used to train the model is sufficiently large. ‘The variations in terms of fairness are bounded by the distance between the model that respects privacy and the model that doesn’t’, he explains. ‘The closer the two models are, the smaller the variations and the more limited the impact of privacy constraints on fairness. There are several methods for training private models that are close to non-private models when the data sets are sufficiently large. Intuitively, the larger the subject population, the harder it becomes to identify any one individual with certainty, which is commonly known as blending into the crowd.’
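Written schematically (in our own notation, not the exact statement from the authors' work), the argument can be summarised as follows: if the unfairness measure F varies smoothly with the model, then the distance between the private and non-private models controls how much fairness can change.

```latex
% Schematic bound (our notation, not the authors' exact result): assume the
% unfairness measure F is L-Lipschitz with respect to a distance d between models.
\[
  \left| F(h_{\mathrm{priv}}) - F(h_{\mathrm{nonpriv}}) \right|
  \le L \, d\left( h_{\mathrm{priv}}, h_{\mathrm{nonpriv}} \right)
\]
% The closer the private model is to its non-private counterpart,
% the smaller the possible variation in fairness.
```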

Human-centred AI, for the good of all

Beyond this particular research initiative, the question of AI and fairness is also a matter of concern for international institutions, and Europe recently took up the subject. The Artificial Intelligence Act (AI Act), the European Commission’s AI regulation project, mentions fundamental rights in terms of data protection, human dignity and non-discrimination. ‘This is why fundamental research in AI is crucial’, Michaël Perrot concludes. ‘It enables us to better understand why biases can emerge in learning models, to study ways to avoid them and, above all, to progress further in the field of human-centred AI.’