Stuttering affects nearly 1% of the population, or more than 600,000 people in France. This communication disorder, recognized as a disability by the French Departmental Houses for Disabled Persons (MDPH), mainly affects men and occurs regardless of the language spoken.
But what are its motor and acoustic characteristics? How can its diagnosis, and then its treatment, be made easier? These questions led the Praxiling language science laboratory to launch an ANR project dedicated to stuttering in 2019.
BENEPHIDIRE: providing additional knowledge about stuttering
Named BENEPHIDIRE, the project's main objective is to provide additional knowledge about stuttering in order to help speech therapists diagnose and treat it.
For several years, Fabrice Hirsch (director of UMR 5267 Praxiling) has been studying stuttering from an acoustic and articulatory point of view, using an articulograph. We therefore wondered how we could take this research further by approaching the problem from several angles.
Slim Ouni, head of the Multispeech project team
What is an articulograph?
Thanks to small sensors glued to the tongue, lips, teeth, and jaw of a subject, the articulograph provides information about the movements of the articulators during speech production. This acquisition technique makes it possible to collect a large amount of articulatory data and is an alternative to X-ray imaging, which is now forbidden outside of a medical context.
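As an illustration of what this kind of articulatory data looks like, here is a minimal sketch (hypothetical data and function names, not the project's actual pipeline): each sensor yields a time series of 3-D positions, from which kinematic measures such as articulator speed can be derived.

```python
import numpy as np

def articulator_speed(positions, sample_rate):
    """Instantaneous speed (mm/s) of one sensor from its 3-D trajectory.

    positions: array of shape (n_samples, 3), sensor coordinates in mm,
    as an electromagnetic articulograph (EMA) might record them.
    This is an illustrative sketch, not the BENEPHIDIRE processing chain.
    """
    # Displacement between consecutive samples, then its Euclidean norm.
    deltas = np.diff(positions, axis=0)
    distances = np.linalg.norm(deltas, axis=1)
    return distances * sample_rate

# Toy example: a sensor moving 1 mm per sample along one axis at 200 Hz.
traj = np.column_stack([np.arange(5.0), np.zeros(5), np.zeros(5)])
speeds = articulator_speed(traj, sample_rate=200)
print(speeds)  # → [200. 200. 200. 200.]
```

Measures like these (speeds, accelerations, gesture durations) are the kind of motor characteristics the project can compare between fluent and disfluent speech.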
Behind this project is a multidisciplinary team of researchers from the INM (Montpellier Neuroscience Institute), Praxiling, LiLPa (Linguistics, Languages and Speech) and LORIA (Lorraine Laboratory for Research in Computer Science and its Applications), as well as speech therapists, the first beneficiaries of these research results.
"It is important to know that a speech therapist spends a lot of time on stuttering rehabilitation, over the course of many appointments with the patient," explains Slim Ouni, before adding: "In the long term, this project aims to develop tools that will allow therapists to carry out personalized remote monitoring of people who stutter, reducing the pressure on speech therapists while enabling efficient rehabilitation of patients."
The BENEPHIDIRE project has three main focuses:
- The first axis focuses on a neurological marker of stuttering, the frontal aslant tract (FAF). Its objective is to verify, in populations of adults and children, whether the integrity and connectivity of this structure can serve as an indicator of stuttering severity and of the risk of it becoming chronic.
- The second aims to study the acoustic and motor characteristics of the disfluencies typical of stuttering. To carry out this work, audio and articulatory recordings will be acquired.
- The third is a feasibility study on the automatic identification of disfluencies, with the long-term aim of developing a mobile application that lets people who stutter self-assess their fluency and practice speaking.
Detecting stuttering through audio: Multispeech's contribution to the BENEPHIDIRE project
It is in this third axis of the BENEPHIDIRE project that the Multispeech project-team, a joint team of the University of Lorraine, Inria and the CNRS, is involved. Specialized in speech processing, its objective is to propose tools for the automatic detection of stuttering, first from audio and then from an audiovisual signal.
"We start from recordings of people who stutter to develop algorithms capable of detecting whether speech is fluent or disfluent and, in the latter case, of identifying the characteristics of the disfluencies, such as blocks, repetitions, or prolongations," explains Slim Ouni. "We hope that our work will one day contribute to a tool capable of assessing the severity of a person's disfluency, so that speech therapists can propose exercises adapted to each person they treat. The results we have obtained so far are satisfactory, but there is still a long way to go before we can hope to see them translated into real life," he says.
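To give a concrete, deliberately simplified idea of what detecting one such disfluency involves, here is a minimal numpy sketch of a rule-based cue: a prolongation tends to appear as a long run of nearly identical short-time spectra. Every name and threshold below is an illustrative assumption, not the team's actual method, which relies on algorithms trained on real recordings.

```python
import numpy as np

def prolongation_candidates(signal, sr, frame_len=400,
                            sim_thresh=0.99, min_frames=10):
    """Return (start_s, end_s) stretches where consecutive frame spectra
    stay almost identical -- a crude cue for prolongations.
    Illustrative heuristic only, not a clinical detector."""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    spectra = np.abs(np.fft.rfft(frames * np.hanning(frame_len), axis=1))
    # Cosine similarity between each pair of consecutive frame spectra.
    norms = np.linalg.norm(spectra, axis=1) + 1e-12
    sims = np.sum(spectra[:-1] * spectra[1:], axis=1) / (norms[:-1] * norms[1:])
    # Collect runs of high similarity lasting at least min_frames.
    runs, start = [], None
    for i, s in enumerate(sims):
        if s >= sim_thresh and start is None:
            start = i
        elif s < sim_thresh and start is not None:
            if i - start >= min_frames:
                runs.append((start * frame_len / sr, (i + 1) * frame_len / sr))
            start = None
    if start is not None and len(sims) - start >= min_frames:
        runs.append((start * frame_len / sr, n_frames * frame_len / sr))
    return runs

# Toy signal: 1 s of a steady 440 Hz tone (prolongation-like), then 1 s of noise.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
noise = np.random.default_rng(0).standard_normal(sr) * 0.1
sig = np.concatenate([tone, noise])
print(prolongation_candidates(sig, sr))  # → [(0.0, 1.0)]
```

Real speech is far messier than this toy signal, which is precisely why the team trains models on recordings of people who stutter rather than relying on hand-set thresholds.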
The next step for Multispeech researchers is to also use visual data, such as the patient's face, to provide additional information in the detection of stuttering.
When we rely solely on a patient's audio data, it is sometimes difficult to know whether the patient has finished speaking or whether there is a block in the articulation of a word. Visual data would allow us to address this problem.
The team, which until now had faced a lack of data for this line of work, is currently running initial experiments on recently collected recordings. The objective: to determine how relevant visual information is for detecting stuttering and, if the results are promising, to take this research further.