Speech Modeling for Facilitating Oral-Based Communication

MULTISPEECH is a joint research team between the Université de Lorraine, Inria, and CNRS. It is part of department D4 “Natural language and knowledge processing” of LORIA.

The Multispeech team considers speech as a multimodal signal with different facets: acoustic, facial, articulatory, gestural, etc. The general objective of Multispeech is to study the analysis and synthesis of these facets and their multimodal coordination in the context of human-human or human-computer interaction.

The research program is organized along the three following axes:

  • Data-efficient and privacy-preserving learning.
  • Extracting information from speech signals.
  • Multimodal Speech: generation and interaction.
Inria centre: Inria centre at Université de Lorraine
In partnership with: CNRS, Université de Lorraine


Team leader: Emmanuelle Deschamps
Team assistant: Delphine Hubert