On Parcoursup, does AI lead to ethical decisions?
Changed on 02/09/2025
Every year, Parcoursup subjects every baccalaureate holder heading for higher education (and their family) to weeks of doubt and even psychological stress. How should I rank my preferences to obtain the best possible result? If I am accepted by this school, will I be offered a boarding place? Will I be admitted to my dream course if I am only 130th on the waiting list? If I receive no offers on the first day, how long will I have to wait? Ethical issues underlie all of these questions.
“From an IT point of view, allocating places to 950,000 young people from a choice of 24,000 courses is a fairly straightforward process. But guaranteeing equal opportunities is another matter altogether!”
Researcher in the joint Fairplay project team
To study these issues, researchers work with anonymized or aggregated data, ensuring the confidentiality of candidates' personal information.
Here is an example: with the predecessor of Parcoursup – Admission Post-Bac (APB) – the best strategy for obtaining a place on a highly sought-after degree course was to rank it as your first choice; this criterion weighed more heavily than having an excellent academic record. But only insiders knew about this technical “bonus” linked to the algorithm. “To avoid this type of pitfall, Parcoursup favours transparency: it must be possible to justify each allocation, and the source code of the allocation procedure is public,” points out Simon Mauras.
Within the Fairplay project team, the development of these “fair” algorithms is a central theme. The work of its researchers, which is theoretical in scope, is published in major scientific journals and at conferences, meaning it is available to everyone.
Created in 2022 by Criteo, the ENSAE Paris engineering school (a member of the Institut Polytechnique de Paris) and Inria, and hosted at the CREST (Centre for Research in Economics and Statistics) laboratory, the Fairplay team studies the interactions between game theory, machine learning and economics. It focuses on economic systems in which different agents interact, and within these multi-agent models the researchers pay particular attention to privacy, ethics and fairness.
Parcoursup has already drawn on some of this work. Hugo Gimbert and Claire Mathieu – two CNRS researchers – provided an algorithm that guarantees compliance with quotas for grant and scholarship holders: if the quota is set at 10% and a school makes 100 offers, at least 10 of them go to grant or scholarship holders.
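The quota mechanism can be sketched as a seat-reservation scheme. This is an illustrative sketch, not the published Parcoursup source code; the function name and data layout are assumptions:

```python
import math

def select_with_quota(ranked, capacity, quota):
    """Illustrative sketch, not the real Parcoursup procedure.
    ranked: applicants ordered best-first, as (name, is_scholarship) pairs.
    Reserves ceil(quota * capacity) seats for scholarship holders,
    then fills the remaining seats strictly by rank."""
    reserved = min(math.ceil(quota * capacity),
                   sum(1 for _, s in ranked if s))
    # Guarantee the reserved seats to the best-ranked scholarship holders.
    chosen = [a for a in ranked if a[1]][:reserved]
    names = {name for name, _ in chosen}
    # Fill the remaining seats in rank order.
    for name, s in ranked:
        if len(chosen) == capacity:
            break
        if name not in names:
            chosen.append((name, s))
            names.add(name)
    # Present the final list in the original rank order.
    order = {name: i for i, (name, _) in enumerate(ranked)}
    return sorted(chosen, key=lambda a: order[a[0]])
```

With a 10% quota and 100 seats, this guarantees at least 10 offers go to scholarship holders whenever enough of them apply; otherwise the spare reserved seats revert to the general ranking.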
Back in 2019, the same duo devised an “automatic response” system: if an applicant held offers from two courses, Parcoursup would automatically keep the one they had ranked higher in their preferences and release the other. This sped up the process for those who had not yet been allocated a place.
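The automatic-response rule is simple to state; here is a minimal sketch (illustrative names only, not the Parcoursup implementation):

```python
def automatic_response(preferences, offers):
    """preferences: the applicant's courses, most preferred first.
    offers: the set of courses currently offering this applicant a place.
    The best-ranked offer is kept automatically; all others are
    declined, releasing those seats to the waiting lists."""
    held = [c for c in preferences if c in offers]
    if not held:
        return None, set()
    return held[0], set(held[1:])
```

For example, an applicant who ranked medicine above law above economics and holds offers from law and economics automatically keeps law, and the economics seat goes back to the waiting list.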
How does the team go about developing these solutions? “At the start of a project, the technical challenge is to find a mathematical translation, which can be integrated into an algorithm, of the ethical properties to be verified, such as non-discrimination or the confidentiality of private data,” explains Simon Mauras.
When it comes to non-discrimination, researchers rely on the concept of “group fairness”. In other words, they divide the profiles analysed by an AI tool into homogeneous groups, such as men and women, managers and non-managers, city dwellers and country dwellers.
They then check whether people from different groups are treated comparably. A similar approach, applied in the United States in 2016, exposed the limitations of a software programme used in the criminal justice system to assess the likelihood of offenders reoffending: it overestimated this risk for African Americans and underestimated it for white citizens.
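A minimal group-fairness check along these lines compares favourable-decision rates across groups – a basic “demographic parity” statistic. The data layout here is an assumption for illustration:

```python
from collections import defaultdict

def positive_rates(records):
    """records: (group, decision) pairs, where decision is True for a
    favourable outcome (e.g. an offer of a place). Returns the
    favourable rate per group; a large gap between groups flags a
    potential bias worth investigating."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += bool(decision)
    return {g: positives[g] / totals[g] for g in totals}
```

Real audits go further (error rates per group, as in the 2016 recidivism study), but the principle is the same: split the population into groups and compare outcome statistics between them.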
The confidentiality of private data – another challenge for researchers – also relies on groups and comparisons. Imagine an AI tool that uses aggregated data from anonymised groups: a statistic computed on a complete group is compared with the same statistic computed on the group with one member removed. The concept of “differential privacy” holds that if the statistic changes significantly, the algorithm reveals too much about the excluded person.
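The intuition can be made concrete: measure how much an aggregate shifts when one member is removed, then add noise calibrated to that sensitivity so the two versions become hard to tell apart. This is a textbook Laplace-mechanism sketch, not a Fairplay implementation:

```python
import random

def mean(xs):
    return sum(xs) / len(xs)

def leave_one_out_gap(data, i):
    """How much the aggregate shifts when person i is removed.
    A large gap means the statistic reveals a lot about them."""
    return abs(mean(data) - mean(data[:i] + data[i + 1:]))

def private_mean(data, epsilon, lo, hi):
    """Laplace mechanism: noise scaled to the mean's sensitivity makes
    the published statistic epsilon-differentially private, assuming
    every value lies in [lo, hi]."""
    sensitivity = (hi - lo) / len(data)
    # The difference of two unit-rate exponentials is a Laplace(0, 1) draw.
    noise = random.expovariate(1.0) - random.expovariate(1.0)
    return mean(data) + noise * sensitivity / epsilon
```

Smaller epsilon means more noise and stronger privacy; the published mean then changes so little when any one person is excluded that their data cannot be inferred from it.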
Back to Parcoursup: open access to some of its source code has inspired researchers at Fairplay to carry out a series of studies. In 2021, Patrick Loiseau, the team leader, conducted a theoretical study to check whether places were assigned fairly to French and international secondary school students. This was followed by a study – still in progress today – of the data from Parcoursup.
We know that the academic career of French secondary school students is easy to assess, especially if they have been to renowned institutions. However, their international counterparts have attended foreign secondary schools, which are not classified and whose level is unknown. This poses a risk of unfairness.
Another of Fairplay's research topics addresses a sensitive issue: the allocation of boarding accommodation. Boarding places are granted on the basis of social criteria (parents' income, distance from home, etc.), whereas admission to a course depends on academic performance. As a result, the two sets of criteria sometimes pull in opposite directions!
To tackle this problem, Parcoursup adopted an algorithm designed by Hugo Gimbert and Claire Mathieu upon the creation of the platform in 2018. Denis Sokolov, a postdoctoral researcher in the team, recently proposed an improvement. The downside is greater complexity, leading to choices that are difficult to justify. The people in charge of Parcoursup have not yet decided whether to use it.
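This is not the Gimbert–Mathieu algorithm itself, but a toy illustration of why the two rankings can conflict: admission follows an academic score, boarding a social score, and a candidate can clear one bar but not the other (all names and fields are assumptions):

```python
def admit_with_boarding(applicants, seats, boarding_places):
    """Toy model only, not the actual Parcoursup procedure. Admission is
    ranked by academic score; among admitted applicants who need
    boarding, accommodation is ranked by social score.
    Returns (admitted, boarded)."""
    admitted = sorted(applicants, key=lambda a: -a["academic"])[:seats]
    needing = [a for a in admitted if a["needs_boarding"]]
    boarded = sorted(needing, key=lambda a: -a["social"])[:boarding_places]
    # Anyone in `needing` but not in `boarded` holds a seat without a
    # room; conversely, a high social-priority candidate may not be
    # admitted at all if their academic rank is too low.
    return admitted, boarded
```

In the test case below, the applicant with the strongest boarding claim is not admitted academically, while an admitted applicant with a weak social score takes the boarding place – exactly the kind of tension a joint allocation procedure must arbitrate.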
Fairplay is also developing solutions to help universities and schools, which are sovereign in their choices, to assess a posteriori whether they have selected the right applicants. This particularly applies to atypical profiles such as international secondary school students or baccalaureate holders who have obtained their diploma as independent candidates.
Another of the team's projects is to estimate how long it will take Parcoursup to offer a place to all 950,000 young people who have registered on the platform. The longer this delay, the greater the difference in treatment between those who know their allocation at the beginning of June and those who have to wait. Mathematical modelling has confirmed what we see in the field: the vast majority of secondary school students obtain a place on a course quickly, but the less fortunate ones face an excessively long wait (several months).
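One way to see where that long tail comes from is a toy round-by-round simulation of offer propagation (random preferences, not the real Parcoursup data): a declined seat is only rediscovered as free the following day, so chains of rejections take time to unwind.

```python
import random

def first_offer_days(n_students, n_courses, capacity, seed=0):
    """Toy model of daily offer rounds. Each day, courses with free
    seats propose to the next students on their lists; a student holds
    their best offer so far and declines the rest, freeing those seats
    for the next day. Returns the day each student got a first offer."""
    rng = random.Random(seed)
    course_rank = [rng.sample(range(n_students), n_students)
                   for _ in range(n_courses)]
    student_pref = [rng.sample(range(n_courses), n_courses)
                    for _ in range(n_students)]
    next_idx = [0] * n_courses   # next student each course will try
    held = {}                    # student -> course currently held
    first_day = {}
    day = 0
    while True:
        day += 1
        proposed = False
        for c in range(n_courses):
            held_seats = sum(1 for h in held.values() if h == c)
            while held_seats < capacity and next_idx[c] < n_students:
                s = course_rank[c][next_idx[c]]
                next_idx[c] += 1
                proposed = True
                first_day.setdefault(s, day)
                pref = student_pref[s]
                if s not in held or pref.index(c) < pref.index(held[s]):
                    held[s] = c   # the student trades up; the old seat
                                  # is only seen as free on a later day
                    held_seats += 1
        if not proposed:
            break
    return first_day
```

In runs of this kind, most students are reached within the first day or two, while a few at the end of long rejection chains wait many rounds – the same skewed pattern the modelling of the real platform confirms.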
Beyond Parcoursup, the ethical AI promoted by Fairplay could benefit society as a whole. Any examples? Possible applications include more precise disease screening policies to avoid excluding any groups of people that might be affected; or even fairer – i.e. less biased – allocations of social housing.
Simon Mauras sees another avenue for research in the emerging field of living organ donation (20-25% of kidney transplants in Europe, but only 10-12% in France), in collaboration with Julien Combe, professor at École Polytechnique, researcher in economics at CREST, and collaborator at Fairplay. This does not necessarily involve AI, but rather algorithms whose design must incorporate principles of fairness and privacy. Simon explains: “An ethical AI tool would increase the opportunities to put patients in touch with each other, and would therefore partly remove the obstacle of incompatibilities between potential donors.” As we can see, the Fairplay team's tools can address ethical issues that go far beyond Parcoursup.