Speech Modeling for Facilitating Oral-Based Communication

MULTISPEECH is a joint research team between the Université de Lorraine, Inria, and CNRS. It is part of department D4 "Natural language and knowledge processing" of LORIA.

Its research focuses on speech processing, with particular emphasis on multisource (source separation, robust speech recognition),
multilingual (computer-assisted language learning), and multimodal aspects (audiovisual synthesis).

The research program is organized along the following three axes:

  • The first axis deals with fundamental challenges related to deep learning, and aims at going beyond supervised black-box learning.
  • The second axis is related to the production and perception of speech, and exploits its physical dimension.
  • The third axis is dedicated to speech in its environment and concerns audio signal analysis and speech recognition.
Inria centre(s)
Inria Nancy Centre
In partnership with
CNRS, Université de Lorraine


Team leader

Delphine Hubert

Team assistant