Digital education

The four pillars of research in AI for education

Changed on 16/01/2024
Artificial intelligence is revolutionising the way we design learning experiences and assess pupils. But what are the keys to the development and integration of this technology in education? We caught up with Jill-Jênn Vie, research fellow with the Soda project team at the Inria Saclay centre, to get some answers.

Artificial intelligence is currently used in all sorts of ways at different levels within education. By providing users with a personalised learning experience, it has the capacity to improve how pupils learn, how teachers work and, more broadly, what the future of education might look like. But if artificial intelligence is to play a central role in the educational experience of the future then there are four main challenges - all closely linked - which stakeholders in the field must focus on.

The first concerns fairness, confidentiality and transparency in decision-making. AI algorithms learn from data. If this data is biased in favour of one particular ethnicity, gender or socioeconomic group, then decisions - on admissions, for example - may be as well. Care must therefore be taken to ensure that AI algorithms do not amplify existing bias, but rather help to reduce inequality.
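One common way to check whether decisions favour one group, which is an illustration rather than the method used by the researchers quoted here, is to compare positive-decision rates across groups. The sketch below, with invented names and toy data, computes this demographic parity gap:

```python
# Minimal sketch (an assumption, not the article's method): measuring the
# demographic parity gap of admissions decisions between groups.

def demographic_parity_gap(decisions, groups):
    """Absolute difference between the highest and lowest positive-decision rates."""
    rates = {}
    for g in set(groups):
        selected = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Toy data: 1 = admitted, 0 = rejected
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(decisions, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.75 admitted in group A vs 0.25 in group B
```

A gap near zero does not prove an algorithm is fair, but a large gap is exactly the kind of measurable discrimination the text argues we need to be able to detect.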

Many stakeholders have sought to adhere to the French data protection authority’s principle of data minimisation, and to exclude the gender variable in their AI systems.

At the same time, we need to be able to measure discrimination if we are to tackle inequality. And if an algorithm, or a decision, is not open, then it will be harder to tell whether or not it’s creating discrimination.

Education, a key research issue at Inria

The second challenge concerns identifying metrics that are useful to both teachers and pupils - what is known as learning analytics - for example, measuring pupils' learning gains. One aspect of this involves summarising and visualising extremely large datasets. The goal is to define the objective functions that will then be optimised using machine learning, and to provide learners with feedback so that they know where they are within the learning space and what progress they are making.
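As an illustration of such a metric (my choice of example, not necessarily the team's), one widely used way to summarise learning gains is Hake's normalised gain, which reports what fraction of the possible improvement a learner achieved between a pre-test and a post-test:

```python
# Sketch of a common learning-analytics metric: the normalised learning gain.
# The scores and maximum are illustrative.

def normalised_gain(pre, post, max_score=100):
    """Fraction of the remaining headroom (max_score - pre) the learner gained."""
    return (post - pre) / (max_score - pre)

# A pupil who moves from 40/100 to 70/100 closed half of the remaining gap.
print(normalised_gain(40, 70))  # 0.5
```

Unlike a raw score difference, this metric gives comparable feedback to pupils who started at very different levels.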

The third challenge, linked to the previous one, concerns predicting pupils' performance - for example, to identify students who might be struggling and how teaching might be tailored to meet their needs.
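A toy version of such a predictor might look like the following. The features and weights are invented for illustration; a real system would fit a model to data rather than hard-code coefficients:

```python
# Sketch under assumptions: a toy logistic model flagging pupils at risk
# of struggling from two hypothetical features (recent accuracy on
# exercises and attendance rate). Weights are illustrative, not fitted.

import math

def risk_of_struggling(accuracy, attendance, w=(-4.0, -3.0), bias=4.5):
    """Probability that a pupil is struggling (higher = more at risk)."""
    z = bias + w[0] * accuracy + w[1] * attendance
    return 1.0 / (1.0 + math.exp(-z))

for pupil, (acc, att) in {"A": (0.9, 0.95), "B": (0.4, 0.6)}.items():
    p = risk_of_struggling(acc, att)
    print(pupil, "at risk" if p > 0.5 else "on track", round(p, 2))
```

The output of such a model is a flag for human attention, not a verdict: the point made above about fairness and transparency applies to these predictions too.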

As they say at the French Ministry of Education's Scientific Council, we are very good at saying that "the level is dropping", but what we want to know is: "what can we do to address that?"

The goal is to be able to act pre-emptively, optimising sequences of exercises presented to pupils (if questions are too easy then the learner will get bored; if they are too hard then the learner will lose motivation) while taking care not to make things worse. This draws on the use of causal inference and reinforcement learning.
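The "not too easy, not too hard" idea above can be sketched with an Elo-style model, a common approach in adaptive testing, though the article does not say which model the team uses. The selector offers the exercise whose predicted success probability is closest to a target, then updates its ability estimate from the outcome:

```python
# Minimal sketch (an assumption, not Inria's system): Elo-style adaptive
# exercise selection targeting a success rate that keeps learners engaged.

import math

TARGET = 0.7  # aimed-for success probability: challenging but not demotivating

def p_success(ability, difficulty):
    """Logistic probability that the learner solves an item of this difficulty."""
    return 1.0 / (1.0 + math.exp(difficulty - ability))

def pick_exercise(ability, difficulties):
    """Choose the item whose predicted success rate is closest to TARGET."""
    return min(difficulties, key=lambda d: abs(p_success(ability, d) - TARGET))

def update_ability(ability, difficulty, correct, k=0.4):
    # Elo update: move the ability estimate toward the observed outcome (0 or 1)
    return ability + k * (correct - p_success(ability, difficulty))

ability = 0.0
chosen = pick_exercise(ability, [-2.0, -0.5, 0.0, 1.0, 2.5])
print("chosen difficulty:", chosen)
ability = update_ability(ability, chosen, correct=1)
```

A reinforcement-learning formulation would go further, optimising whole sequences of exercises rather than one greedy choice at a time, which is where the care "not to make things worse" comes in.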

The fourth challenge relates to the development of automated content generation: from course texts and exercises to corrections. We've heard a great deal about ChatGPT recently. There is a lot of concern within education about how learners might use it to cheat, but it also presents a great opportunity: generating fun, innovative exercises that can be tailored to pupils' needs, or learning a language through interaction with a large language model (LLM) that adapts to the student's level and provides feedback on any mistakes they might make.
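The tutoring loop described above might be sketched as follows. Everything here is hypothetical: `ask_llm` is a placeholder stub, not a real API, and any provider's chat endpoint could stand in for it:

```python
# Hypothetical sketch of a level-aware language-tutoring loop with an LLM.
# `ask_llm` is a stub; a real system would call an actual model here.

def build_tutor_prompt(language, level, learner_message):
    """Assemble a prompt that adapts the tutor to the learner's CEFR level."""
    return (
        f"You are a {language} tutor for a learner at CEFR level {level}. "
        f"Reply in simple {language}, correct any mistakes in the learner's "
        f"message, and briefly explain each correction.\n"
        f"Learner: {learner_message}"
    )

def ask_llm(prompt):
    # Placeholder: swap in a call to a large language model of your choice.
    return "(model reply)"

prompt = build_tutor_prompt("French", "A2", "Je suis allé au cinéma hier !")
print(ask_llm(prompt))
```

The pedagogical logic (level, correction policy, feedback style) lives in the prompt construction, which is what makes the experience adaptable to each pupil.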

We often focus on the risks, without considering the potential benefits of AI in education and training.


Read or re-read this article in our 2022 annual report

Digital health, disability, environment, education, energy, reliability of our digital tools... Inria's annual report opens a window on the involvement of digital sciences in some of the most pressing issues facing our society! It's also a time for dialogue between scientists, partners and Inria staff.