Speech Modeling for Facilitating Oral-Based Communication

MULTISPEECH is a joint research team between the Université de Lorraine, Inria, and CNRS. It is part of department D4 “Natural language and knowledge processing” of LORIA.

Its research focuses on speech processing, with particular emphasis on multisource (source separation, robust speech recognition),
multilingual (computer-assisted language learning), and multimodal aspects (audiovisual synthesis).

The research program is organized along the three following axes:

  • explicit speech modeling, which exploits the physical properties of speech,
  • statistical speech modeling, which relies on machine learning tools such as Bayesian models (HMM-GMM) and deep neural networks (DNN),
  • modeling of the uncertainties due to the strong variability of the speech signal and to model imperfections.
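To give a concrete flavor of the statistical axis, the sketch below scores speech feature frames under a diagonal-covariance Gaussian mixture, the per-state building block of the HMM-GMM acoustic models mentioned above. All dimensions, parameters, and data here are synthetic placeholders, not the team's actual models.

```python
import numpy as np

# Synthetic stand-in for acoustic features: 100 frames of 13-dim vectors
# (e.g. MFCC-like coefficients). Purely illustrative.
rng = np.random.default_rng(0)
frames = rng.normal(size=(100, 13))

# A hypothetical 2-component mixture with diagonal covariances.
means = np.stack([np.zeros(13), np.ones(13)])   # (components, dims)
variances = np.ones((2, 13))
weights = np.array([0.6, 0.4])

def log_likelihood(x, means, variances, weights):
    """Per-frame log-likelihood under a diagonal GMM, via log-sum-exp."""
    # Component log-densities: shape (frames, components)
    diff = x[:, None, :] - means[None, :, :]
    comp = -0.5 * np.sum(diff**2 / variances + np.log(2 * np.pi * variances), axis=2)
    comp += np.log(weights)
    # Numerically stable log-sum-exp over components
    m = comp.max(axis=1, keepdims=True)
    return (m + np.log(np.exp(comp - m).sum(axis=1, keepdims=True))).ravel()

ll = log_likelihood(frames, means, variances, weights)
print(ll.shape)  # one log-likelihood score per frame
```

In a full HMM-GMM recognizer, one such mixture is attached to each HMM state and these per-frame scores feed the Viterbi decoding; DNN acoustic models replace the mixture with a network that predicts state posteriors.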


Team members:

  • Denis Jouvet, team leader
  • Delphine Hubert, team assistant
  • Helene Cavallini, team assistant