


Jean-Michel Prima - 9/02/2011

Music in 3D


To better separate the different sounds within a recording in order to remix them at will and rebroadcast them in 3D: that is the objective of i3DMusic, a French-Swiss collaboration bringing together two SMEs and two research centres. Higher-performing algorithms could open up new applications for music professionals, as Emmanuel Vincent, who leads this research at the Inria Rennes - Bretagne Atlantique centre, explains.

Edith Piaf fans can still hardly believe it. The film La Vie En Rose (La Môme) restores the singer’s voice with a closeness never heard before. This is not just down to the magic of cinema: an algorithm made a precious contribution, revisiting vintage mono recordings to separate the singing from the instrumental background and then create a new mix in 5.1 format. This computerised extraction work was carried out by Audionamix, a Paris SME providing audio services for the entertainment sector. Football fans also have it to thank for the famous Vuvuzela Remover, a software programme capable of muting the blaring horns of South African supporters without losing any of the surrounding atmosphere or the match commentary.

From mixing to interactive spatialisation, such a feat opens up new perspectives for sound engineers and artists. However, the technology has come up against a sticking point: live performance. “In order to produce a high-quality 3D rendering and position sounds at will, it is necessary to have perfectly separated sound sources. This is rarely the case in reality: even when the music concerned is electronic or studio-recorded, the original tracks are generally not available”, observes researcher Emmanuel Vincent. “In the case of a concert, for example, microphones inevitably record sound in which several sources, and several instruments, are combined. It is impossible for the moment to separate these signals on the fly. The result presents imperfections which are clearly audible.” These unacceptable jarring notes are known as artefacts.
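To make the idea concrete, here is a minimal, illustrative sketch in Python. It is not the i3DMusic algorithms, whose details are not given here: two synthetic sources are mixed into a single channel, then pulled apart with time-frequency masks computed, for illustration only, from the true source spectrograms. In practice the masks must be estimated from the mixture alone, and their errors leak one source into the other or remove part of the target, which is exactly what is heard as artefacts.

```python
# Illustrative sketch only (not the project's method): mix two sources into one
# channel, then "separate" them with soft time-frequency masks.
import numpy as np
from scipy.signal import stft, istft

fs = 16000
t = np.arange(fs * 2) / fs
voice = np.sin(2 * np.pi * 220 * t)                              # stand-in for a singer
accomp = 0.5 * np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(t.size)
mix = voice + accomp                                             # what one microphone records

# STFT of the mixture and of each source (oracle knowledge, for illustration only)
_, _, MIX = stft(mix, fs, nperseg=1024)
_, _, V = stft(voice, fs, nperseg=1024)
_, _, A = stft(accomp, fs, nperseg=1024)

# Soft (Wiener-like) mask: each time-frequency bin of the mixture is shared in
# proportion to the source energies.
eps = 1e-10
mask_voice = np.abs(V) ** 2 / (np.abs(V) ** 2 + np.abs(A) ** 2 + eps)

# Apply the mask to the mixture and resynthesise the estimated voice. With
# estimated (imperfect) masks, the residual errors are audible as artefacts.
_, voice_est = istft(mask_voice * MIX, fs, nperseg=1024)
```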

A Eurostars project


It is precisely with a view to solving this problem that Metiss, a Rennes-based Inria team (joint with the CNRS), has initiated a partnership with Audionamix, as well as the Laboratory of Electromagnetics and Acoustics at the École polytechnique fédérale de Lausanne (EPFL) and Sonic Emotion, a Swiss manufacturer of speakers and DSP for 3D sound rendering. Named i3DMusic, this three-year collaboration is organised within the framework of a Eureka project, and more precisely its Eurostars section, which is aimed at SMEs. “The objective is to produce algorithms enabling better real-time separation of a recording, and also to optimise spatialisation”, i.e. the filtering that routes each source to the speakers.
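As a hedged sketch of what this “filtering to the speakers” can mean in the simplest case, the fragment below routes already-separated sources to a speaker layout through per-source, per-speaker gains. This is plain amplitude panning with made-up gain values; a commercial 3D renderer such as Sonic Emotion’s would use more sophisticated filtering.

```python
# Simplest form of spatialisation: each separated source gets its own set of
# per-speaker gains, and the speaker feeds are the gain-weighted sums.
import numpy as np

def spatialise(sources, gains):
    """sources: (n_sources, n_samples); gains: (n_sources, n_speakers).
    Returns one signal per speaker, shape (n_speakers, n_samples)."""
    return gains.T @ sources

# Example: two separated sources sent to a 4-speaker layout (values are illustrative).
fs = 16000
n = fs * 2
sources = np.vstack([np.sin(2 * np.pi * 220 * np.arange(n) / fs),
                     np.sin(2 * np.pi * 440 * np.arange(n) / fs)])
gains = np.array([[0.9, 0.1, 0.3, 0.0],    # source 1 placed towards the front left
                  [0.0, 0.2, 0.1, 0.9]])   # source 2 placed towards the rear right
speaker_feeds = spatialise(sources, gains)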

“The real-time algorithms used remain fairly simplistic for the moment. The ones we are going to design will no doubt perform better; nevertheless, we do not yet know whether they will achieve the level of quality required.” This is because live performance introduces an additional constraint: a balance must be found between the computing time and the acceptable loss of quality. The key question is whether the artefacts can be made imperceptible to the ear. “We therefore place great importance on the assessment phase of this work, which will be conducted by EPFL psychoacoustics specialists. Should the real-time constraint prove too demanding, we will turn our focus to other application scenarios where quality comes first, by carrying out the separation offline beforehand, whilst retaining the possibility of real-time spatialisation of the sources separated in this way.”
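The trade-off can be made tangible with a rough real-time budget check, sketched below under assumed figures (44.1 kHz audio, 1024-sample hops, a placeholder per-frame computation, none of which are project numbers): the processing of each new block of samples must finish before the next block arrives, i.e. the real-time factor must stay below 1.

```python
# Rough real-time budget check for frame-by-frame processing.
import time
import numpy as np

fs = 44100
hop = 1024                                   # samples of new audio per frame
hop_duration = hop / fs                      # time budget per frame (about 23 ms)

def separate_frame(frame):
    # placeholder for the per-frame separation work
    return np.fft.rfft(frame)

frame = np.random.randn(hop)
n_frames = 200
start = time.perf_counter()
for _ in range(n_frames):
    separate_frame(frame)
elapsed = (time.perf_counter() - start) / n_frames

real_time_factor = elapsed / hop_duration    # must stay below 1 for live use
print(f"per-frame cost {elapsed*1e3:.2f} ms, budget {hop_duration*1e3:.2f} ms, "
      f"real-time factor {real_time_factor:.2f}")
```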

Keywords: METISS project-team, Inria Rennes - Bretagne Atlantique research centre, Music, Emmanuel Vincent, i3DMusic, 3D sound, Spatialisation
