Antescofo: music and computer science playing the same tune
Can a machine listen to a musician and enter into dialogue with them in real time while they are playing a piece of music? Arshia Cont, research director of an Inria/CNRS/Ircam team, has taken up the challenge by creating the start-up Antescofo, which uses software developed with the MuTant research team to combine real musical instruments with electronic sound production devices.
How did your project come into being?
Arshia Cont: Antescofo brings together two passions I have had since I was a teenager: computer science and music. While studying mathematics, I became interested in signal processing. And during my thesis on music and computer science, I asked myself the following question: can a computer play on stage with musicians in the same way as a real-life musician? In other words, can a machine be equipped with musical intelligence? That is the challenge addressed in part by Antescofo.
What is the principle behind it?
Arshia Cont: It's quite simple. When several musicians play together in real time, they listen to and synchronise with each other. We have transferred this extraordinary human capacity to the machine, which, as a result, plays the role of a musician and is capable of performing a score. It is equipped with listening and synchronisation faculties, which are the main characteristics of the Antescofo system. Thanks to these principles - which are derived from the reactive and timed languages used in aeronautics - it is now the machine that adapts to the human way of playing, and not the other way round. In order for the computer to be capable of interacting and entering into dialogue with humans, we have developed an artificial intelligence algorithm tested by music professionals and internationally renowned orchestras.
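To give a flavour of the "listening" faculty described here: Antescofo's real engine performs probabilistic inference on the audio signal, but the core idea of score following - tracking a performer's position in a known score as notes arrive - can be sketched very loosely. The note names, class name, and `max_skip` tolerance below are illustrative assumptions, not part of the actual system.

```python
# Toy sketch only: Antescofo's real listening machine works on audio with
# probabilistic models. This simplified "score follower" tracks position in
# a symbolic score as detected notes arrive, skipping over wrong notes.

SCORE = ["C4", "E4", "G4", "C5", "G4", "E4", "C4"]  # hypothetical score

class ToyScoreFollower:
    def __init__(self, score, max_skip=2):
        self.score = score
        self.pos = -1             # index of the last matched score event
        self.max_skip = max_skip  # how far ahead to search for a match

    def hear(self, note):
        """Advance the position if the heard note matches an upcoming event."""
        for ahead in range(1, self.max_skip + 1):
            i = self.pos + ahead
            if i < len(self.score) and self.score[i] == note:
                self.pos = i
                return i          # new position in the score
        return self.pos           # no match: hold the current position

follower = ToyScoreFollower(SCORE)
for heard in ["C4", "E4", "F4", "G4", "C5"]:  # "F4" is a wrong note
    follower.hear(heard)
print(follower.pos)  # ends at index 3 ("C5") despite the wrong note
```

The follower tolerates the mistaken "F4" and still resynchronises on the next correct note, which is the essence of what lets the machine keep following a fallible human performer.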
Is this programme also aimed at the general public?
Arshia Cont: Absolutely! With this software, any musician who wants to practise and play their part alone can - in the absence of other musicians - delegate the other instruments' parts to pre-recorded sounds or instrument synthesisers. In other words, the accompaniment can be replaced by digital material. The interest of this real-time dynamic anticipation system lies in its ability to slow down or speed up with the performer.
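The "slow down or speed up" behaviour can be pictured as rescaling the accompaniment's event times by the ratio between the performer's observed tempo and the written tempo. This is a minimal sketch under assumed names and numbers, not Antescofo's actual scheduling algorithm.

```python
# Toy sketch (not Antescofo's algorithm): accompaniment event times are
# rescaled by the ratio of the performer's observed beat duration to the
# nominal one, so the machine slows down or speeds up with the human.

NOMINAL_BEAT = 0.5  # seconds per beat at the written tempo (120 BPM)

def observed_beat(onsets):
    """Average inter-onset interval of the performer's recent notes."""
    gaps = [b - a for a, b in zip(onsets, onsets[1:])]
    return sum(gaps) / len(gaps)

def reschedule(accomp_beats, onsets):
    """Map accompaniment beat positions to clock times at the live tempo."""
    ratio = observed_beat(onsets) / NOMINAL_BEAT
    return [beat * NOMINAL_BEAT * ratio for beat in accomp_beats]

# The performer plays 20% slower than written (0.6 s per beat, not 0.5 s),
# so the accompaniment's events are stretched to match:
times = reschedule([0, 1, 2, 4], [0.0, 0.6, 1.2, 1.8])
```

In a real system the tempo estimate would be updated continuously and would anticipate changes rather than merely react to them, which is exactly the "anticipation" discussed below.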
This real-time musical interaction is quite a crazy idea...
Arshia Cont: Musicians who succeed in playing and finishing a piece together have an almost magical anticipatory capacity. Ensuring that the software can detect where in a score a performer is while playing is pretty exhilarating, and that is what guided our research. In this way, Antescofo anticipates - thanks to an artificial ear - and reacts in the correct manner, like a sort of on-the-spot performance. As a result it is an open system, creating live interaction. The result is impressive, as this demonstration video shows.
How do you see Antescofo developing?
Arshia Cont: Today, everybody is a music consumer, and the idea is to make music-making increasingly accessible - even without any musical training. Our strategy is to move into the digital music market. Antescofo allows users to go beyond simply consuming music and to become involved in its creation. First and foremost, this programme pertains to the areas of pleasure and entertainment, open to all music-lovers: those who are interested in musical composition as well as performance enthusiasts. And so a new type of karaoke is born, where the software calculates the music based on the voice - even if it is out of tune. It is a wonderful prospect that only computer science can make accessible.
Why did you choose the start-up model to develop Antescofo?
Arshia Cont: The start-up is a way of working that stimulates innovation, strengthens team spirit and increases responsiveness. The chosen model is in line with current web trends, such as the "lip sync" and "mashup" culture phenomena. Their social potential is enormous and, most importantly, under-exploited at present. The start-up adventure is exciting, but there are many pitfalls - and that is why support is necessary in order to avoid them.
Lip sync refers to the techniques used to synchronise lip movements with the words or sounds that are supposed to be pronounced.
A mashup consists of creating a song or musical composition from two or more existing songs.
The start-up project
Name: Antescofo
Date of creation: 2015
Location: Paris
Domain: digital music
For more details
Start-up Inria 2005-2017
The technology companies originating from Inria manufacture products stemming from research prototypes or disseminate the know-how acquired by the Institute. Their founding teams include a former member of an Inria team.