
School bullying: an issue school pupils want to see addressed
The researchers of the Petscraft project team got the idea of working on school bullying from those most affected by it: the pupils themselves. “I work with secondary school pupils as part of the national programme ‘A scientist in every classroom’,” explains Nicolas Anciaux, director of research at the Inria Saclay Centre. “This initiative, supported by Inria and the French government, aims to ensure that every fifth-year class in France gets a visit from a researcher specialising in digital technology. During a presentation I gave on privacy back in 2023, I asked the pupils for practical subjects they would like to see us address. Several of them suggested making the process of reporting school bullying truly anonymous.”
This was an issue that Cédric Eichler, an associate professor at the Val de Loire Insa Centre and one of the co-founders of Petscraft, was already familiar with.
Verbatim
“I preside over disciplinary committees at the Insa. When it comes to bullying, I have seen students refuse to come forward for fear of being identified, and others come forward with no thought for the risk of being recognised by the accused.”
Cédric Eichler, associate professor at the Val de Loire Insa Centre
Close cooperation between researchers from Inria and the Insa
Addressing school bullying will be one of the first research topics for this young team, which was officially launched in 2024 on the back of a partnership between two researchers. Nicolas Anciaux and Benjamin Nguyen, a professor at the Val de Loire Insa Centre in Bourges, had spent years working together on the question of privacy.
Keen to work together more closely, they created Petscraft alongside colleagues from the Insa. “Bourges is a key location for military cybersecurity and privacy is a part of cybersecurity”, explains Nicolas Anciaux. “So it was only logical for us to come together.” Their goal? To tackle new subjects linked to privacy that have yet to be explored.
The Petscraft team
Officially launched in June 2024, Petscraft is headed up by Benjamin Nguyen. It has five founding members: Inria’s Nicolas Anciaux and four associate professors from the Val de Loire Insa Centre (Adrien Boiret, Xavier Bultel, Cédric Eichler and Benjamin Nguyen), plus two external collaborators (Iulian Sandu Popa from the University of Versailles and José Maria de Fuentes from the University of Madrid). There are also around a dozen PhD students and postdoctoral researchers from Inria and the Insa, as well as Loïc Besnier, a doctor of human sciences and head of science outreach.
The name Petscraft comes from the fact that the team is looking to develop PETs, Privacy-Enhancing Technologies, which it wants to “craft”, i.e. design, analyse, implement, deploy and test.
Petscraft will focus on four main areas of research with regard to PETs:
- Explainable models
- Decision support
- Secure protocols
- Managing confidential data
Can AI correctly guess the identity of anonymous contributors?
School bullying is one such subject. Researchers from the Inria Saclay Centre and the Val de Loire Insa Centre first started working on this issue in late 2023, with the aim of using a chatbot to protect the identities of people reporting cases of bullying. Nicolas Anciaux and Cédric Eichler had come across a paper published by EPFL (École polytechnique fédérale de Lausanne) showing that, when a piece of text written by a human was submitted to an LLM (Large Language Model) and the model was then asked about the author, it could infer details such as their gender, their age and where they live, even though none of this was stated explicitly in the text.
“LLMs are models like ChatGPT which use generative AI to analyse, process and generate natural language”, explains Cédric Eichler. “We felt they were just what we needed for our research. To demonstrate this, we submitted a dataset that we had (hotel reviews, for which we knew the gender and the age group of the authors) to ChatGPT. In 78% of cases the chatbot was able to correctly guess whether the person who had written the review was a man or a woman. We then asked it to rewrite the review, deleting any information that would give away the author’s gender and adopting a neutral tone: the gender detection rate fell to 52%, which is close to random and reflects the make-up of the population. This result was enough to convince us to recruit a PhD student, Lucas Biéchy, when Petscraft was created back in June 2024.”
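The evaluation described above can be sketched as a simple loop: measure how often a model guesses the author's gender before and after a neutral rewrite. The two LLM calls below are stubbed placeholders, not the team's actual prompts or models; in the real experiment they were requests to ChatGPT.

```python
# Minimal sketch of the before/after evaluation, assuming stubbed LLM calls.
def guess_gender(text: str) -> str:
    """Stub: a real version would ask an LLM whether the author
    of the review is a man or a woman."""
    return "woman" if "my husband" in text else "man"

def rewrite_neutrally(text: str) -> str:
    """Stub: a real version would ask an LLM to remove gender cues
    and adopt a neutral tone."""
    return text.replace("my husband", "my partner")

def detection_rate(reviews: list[tuple[str, str]], rewrite: bool = False) -> float:
    """Fraction of reviews whose author's gender is guessed correctly."""
    hits = 0
    for text, true_gender in reviews:
        if rewrite:
            text = rewrite_neutrally(text)
        if guess_gender(text) == true_gender:
            hits += 1
    return hits / len(reviews)

# Toy dataset standing in for the labelled hotel reviews.
reviews = [
    ("Lovely stay, my husband enjoyed the pool.", "woman"),
    ("Great gym, clean rooms.", "man"),
]
before = detection_rate(reviews)                # detection on the raw text
after = detection_rate(reviews, rewrite=True)   # detection after neutral rewrite
```

With real LLM calls and a real labelled dataset, `before` and `after` would correspond to the 78% and 52% figures reported by the team.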
Innovative research into privacy
Central to this research is a question that hasn't been asked before: can LLMs disclose the identity of authors? And, conversely, are they able to protect their identities? Having only been created a few months ago, the team is just getting started on their research, and is currently seeking to determine the methodology they will use in order to get off on the right foot.
Verbatim
“We are looking to define the rules and metrics for designing and validating PETs (Privacy-Enhancing Technologies) that use LLMs to reformulate texts.”
Cédric Eichler, associate professor at the Val de Loire Insa Centre
“We also want to explore residual risks in texts, the over-training of LLMs and the use of other types of technology, including RAG (Retrieval-Augmented Generation).” RAG is a technique combining information retrieval with AI content generation: whereas LLMs generate content using only the data learned during training, RAG consults an external document base in real time in order to ground and enrich the text the AI produces.
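The RAG idea can be illustrated with a minimal sketch: retrieve the most relevant passage from an external document base, then prepend it to the prompt sent to the LLM. The word-overlap retriever below is a toy stand-in; production RAG systems use vector embeddings and a similarity index.

```python
# Minimal RAG sketch: toy retrieval plus prompt assembly (no real LLM call).
def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query (toy retriever)."""
    q = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Prepend the retrieved passages so the LLM can ground its answer in them."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Toy external document base.
docs = [
    "Reports of bullying must be handled confidentially.",
    "The cafeteria menu changes every week.",
]
prompt = build_prompt("How should bullying reports be handled?", docs)
```

The resulting `prompt` would then be sent to the LLM, whose answer is grounded in the retrieved passage rather than only in its training data.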
Developing a chatbot to protect pupils’ identities
What’s next for this project? “We’re going to be taking things one step at a time”, says Nicolas Anciaux. “We don't have access to big enough datasets to immediately start researching cases of school bullying. Instead we will start with hotel reviews or other such texts for which the datasets are public. We will then turn to bullying at Grandes Écoles (specialised top-level educational institutions in France), which will be easier to address than at schools, given that the students are adults.”
What is the long-term goal?
“Our aim is to create a chatbot capable of rewriting reports of bullying by school pupils in such a way that they cannot be identified, or to make them aware of any risk of being recognised. Aside from school bullying, there are all sorts of ways that LLMs could be used in relation to privacy, such as analysing CVs or reading personal web pages.” Certainly one to keep an eye on.
Find out more
- Policy for addressing school bullying, French Ministry of Education, November 2024.
- More than one pupil per class affected by school bullying, Les Échos, 12/2/2024.
- How do cases of school bullying manage to escape detection by adults for so long?, The Conversation, 1/12/2024.
- Exploring complex databases in order to tackle fake news and online hate, Inria, 4/3/2021.
- The four pillars of research in AI for education, Inria, 13/11/2023.
- Beyond Memorization: a website run by the EPFL where the general public can test their skills against LLMs.
For experts:
- reteLLMe: Design Rules for using Large Language Models to Protect the Privacy of Individuals in their Textual Contributions, DPM 2024 – International Workshop on Data Privacy Management @ ESORICS, Barcelona (Spain), 3/9/2024.