Artificial intelligence

Exploring complex databases in order to tackle fake news and online hate

Date: changed on 06/12/2023
From bringing together disaggregated data in enormous databases to analysing message flows in order to identify conflict situations, machine-learning and deep-learning methods in artificial intelligence have massively accelerated the speed at which such tasks can be performed. Determining the relevant indicators, however, requires close collaboration with practitioners in the field.
An attempt at mapping artificial intelligence topics - workshop/demonstration at the GFAIH
© Inria / Photo A. Bacquet

Giving journalists the tools they need for fact checking

Ioana Manolescu, head of the Cedar research team, has been investigating ways of helping journalists to fact-check information using data that are available online. After publishing one of the very first scientific papers on the use of digital technology in fact-checking back in 2013, the team launched the ANR (French National Research Agency) project ContentCheck in 2015, working in collaboration with researchers from Irisa, Limsi, Liris and Paris Sorbonne University. The editorial team responsible for the “Les décodeurs” column in the newspaper Le Monde also took part in the project.

The tool they developed aggregates information from diverse databases, making it easier to exploit and to correlate. Users can reshape enormous databases, such as those published by INSEE, into more usable, user-friendly formats, enabling automatic cross-referencing operations that would take hours if done “by hand”.
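To give a rough idea of what such cross-referencing looks like in practice, here is a minimal Python sketch using pandas. It is not the ContentCheck system itself: the tables, column names and the claim being checked are all made up for the illustration, which simply joins two sources and tests a claimed figure against the merged result.

```python
# Minimal sketch of automated cross-referencing of open-data tables.
# All data below are hypothetical placeholders, not real INSEE figures.
import pandas as pd

# Hypothetical extracts from two statistical sources (INSEE-style tables).
population = pd.DataFrame({
    "region": ["Île-de-France", "Bretagne", "Occitanie"],
    "population": [12_300_000, 3_400_000, 6_000_000],
})
unemployed = pd.DataFrame({
    "region": ["Île-de-France", "Bretagne", "Occitanie"],
    "unemployed": [900_000, 200_000, 550_000],
})

# Cross-reference the two tables and derive an indicator that would take
# hours to compute "by hand" across many such sources.
merged = population.merge(unemployed, on="region")
merged["unemployment_share"] = merged["unemployed"] / merged["population"]

# Check a hypothetical claim: "more than 10% of the population of
# Occitanie is unemployed".
claim_region, claim_threshold = "Occitanie", 0.10
actual = merged.loc[merged["region"] == claim_region, "unemployment_share"].iloc[0]
verdict = "supported" if actual > claim_threshold else "not supported"
print(f"{claim_region}: {actual:.1%} -> claim {verdict}")
```

In a real fact-checking workflow, the joined tables would come from many heterogeneous sources, and the point of the tool is precisely to automate this aggregation and comparison step.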

Detecting the cyberbullying of teenagers to enable a swift response

Whether we are talking about insults, photo-montages or emojis, identifying keywords is not sufficient to differentiate between young people teasing one another online and cyberbullying. Close collaboration between computer scientists, sociologists and psychologists is needed in order to identify high-risk situations early on and to ensure that victims are given the support they need. This was the thinking behind the CREEP project (Cyberbullying Effects Prevention), which was funded by EIT Digital in 2018 and 2019.

Among the partners from France, Italy and Germany were two members of the Inria project team Wimmics: Elena Cabrio, an assistant professor at Université Côte d'Azur, and Serena Villata, a research fellow at the CNRS. Their role was to develop an algorithm for detecting cyberviolence, drawing on indicators devised with their colleagues from the human and social sciences, such as specific emotions and feelings. In 2020, the researchers were granted funding from Otesia*, which enabled them to continue the project and adapt the tool from its original Italian into French. This funding also enabled them to visit secondary schools in the Greater Nice metropolitan area to talk to pupils about cyberviolence.
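By way of illustration, the sketch below shows one simple way such indicator-based detection could be wired together: word-level tf-idf features combined with a tiny, hypothetical "distress" lexicon feeding a linear classifier. The messages, lexicon and labels are invented for the example and bear no relation to the CREEP project's actual data or model.

```python
# Toy sketch: combine surface text features with an emotion-indicator
# feature of the kind suggested by sociologists and psychologists.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from scipy.sparse import hstack, csr_matrix

messages = [
    "you are worthless, nobody wants you here",
    "haha you looked so funny in that photo, see you tomorrow!",
    "everyone hates you, just disappear",
    "great goal at practice today, well done",
]
labels = [1, 0, 1, 0]  # 1 = cyberviolence, 0 = harmless banter

# Hypothetical emotion/feeling lexicon (placeholder, not the project's).
distress_lexicon = {"worthless", "hates", "disappear", "nobody"}

def lexicon_score(text):
    # Count how many words of the message appear in the distress lexicon.
    return sum(w.strip(",.!") in distress_lexicon for w in text.lower().split())

vectorizer = TfidfVectorizer()
X_text = vectorizer.fit_transform(messages)
X_lex = csr_matrix([[lexicon_score(m)] for m in messages])
X = hstack([X_text, X_lex])  # word features + emotion indicator

clf = LogisticRegression().fit(X, labels)

new_msg = "nobody likes you, you are worthless"
X_new = hstack([vectorizer.transform([new_msg]),
                csr_matrix([[lexicon_score(new_msg)]])])
print(clf.predict(X_new))  # on this toy data the message is flagged (class 1)
```

A flagged message would then be passed to a human (teacher, counsellor, moderator) rather than acted on automatically, which is where the collaboration with psychologists and sociologists matters most.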

Immigration: detecting and analysing online hate speech

Social media has proved a fertile breeding ground for hate speech, particularly against migrants. The M-Phasis project was created to explore how this hate is expressed in comments posted by web users. Co-financed by the French National Research Agency and its German counterpart from 2018 to 2022, the project brings together computer scientists and specialists in the human and social sciences from both countries. It sets out to identify and compare the prevalence of anti-immigrant speech on either side of the Franco-German border, as well as the factors influencing its emergence.

Three members of Multispeech, an Inria-Loria team, were recruited to the project for their expertise in natural language processing**: Irina Illina (lecturer at the University of Lorraine), Dominique Fohr (research fellow at the CNRS) and Ashwin Geet D'sa (a PhD student). Although their research is primarily exploratory, it is hoped that, in the long term, it could facilitate the moderation of online media and social networking platforms through the development of a tool capable of automatically detecting potential instances of hate speech.
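To give a concrete sense of what such a tool might look like, the sketch below scores comments with a transformer-based sequence classifier, a common approach for this task rather than the team's actual system. The checkpoint bert-base-multilingual-cased is a real public model, but the two-class head added here is randomly initialised: it would have to be fine-tuned on labelled hate-speech comments (not reproduced here) before its scores meant anything.

```python
# Sketch of transformer-based comment screening for moderation support.
# The classification head is untrained until fine-tuned on labelled data.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased",
    num_labels=2,  # e.g. 0 = non-hateful, 1 = hateful
)

comments = [
    "Les migrants sont les bienvenus dans notre ville.",
    "Ces gens ne méritent pas d'être ici.",
]

inputs = tokenizer(comments, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
scores = torch.softmax(logits, dim=-1)

# Print the probability assigned to the "hateful" class for each comment;
# anything above a moderation threshold would go to a human moderator
# for review rather than being deleted automatically.
for comment, score in zip(comments, scores[:, 1].tolist()):
    print(f"{score:.2f}  {comment}")
```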

 

* Otesia: Observatoire des impacts technologiques, économiques et sociétaux de l’intelligence artificielle - Observatory for assessing the technological, economic and societal impact of artificial intelligence

** In artificial intelligence, natural language processing enables machines to analyse and interpret human language, whether spoken or written.

Digital technology and the human and social sciences: a symbiotic relationship