A project to build an audio knowledge base
From pop and rock to jazz and reggae, thanks to the internet, listening to songs has never been so easy. But consuming songs individually doesn't lend itself to analysis or in-depth study. What do Serge Gainsbourg’s lyrics talk about? What chords would we typically find in a David Bowie song? With whom did Queen write, arrange and produce their 19 albums? Which artists influenced The Rolling Stones?
“Such questions are of interest to musicologists, composers, music teachers, specialist journalists and all music lovers”, explains Michel Buffa, a professor at Université Côte d'Azur and member of the project team Wimmics (I3S/Inria). “There are powerful modelling, analysis and machine learning tools for creating extensive knowledge bases. This is how the French National Research Agency project Wasabi came about, which was conducted between 2017 and 2021.”
Partners including Deezer and Ircam
It’s worth pointing out that Michel Buffa is himself a rock music enthusiast, having played guitar in amateur groups. He launched Wasabi in 2017 alongside several partners, including Deezer, which gave them access to audio files for two million songs, and Ircam (a French research institute for music and sound). Investigations went in three directions.
The first of these was lyrics analysis, the aim being to identify the themes of songs, the places and people mentioned in them, the range of vocabulary, the verse-chorus structure, and so on. Then came audio analysis, separating the overall sound into individual tracks for each instrument, recognising musical genres, identifying keys, chords, chord progressions, and so on.
Using machine learning to detect emotions
Finally, the researchers collected metadata on songs from around twenty sources, from the most well-known (Wikipedia, Deezer, Spotify and YouTube) to the most unexpected, such as equipboard.com, a website which provides the makes and models of electric guitars used by hundreds of artists across their careers.
“From a scientific perspective, the hardest thing was bringing the audio and semantic data together. Using a German karaoke website, we were able to synchronise the lyrics and music of more than 500,000 songs. We also relied heavily on machine learning, for example to detect the emotions expressed in songs by cross-referencing sound, lyrics and verse-chorus structure.”
Michel Buffa, professor at Université Côte d'Azur and member of the Wimmics project team
Finally, precise classification by musical genre
After four years of work, what the researchers had been able to produce was a goldmine. Never before had so much data been collected on so many songs: themes, songwriters, composers, performers, musicians, instruments used, recording locations, producers’ names, album titles, etc.
This was also the first time that so many songs had been classified by genre with such precision. “This is a recurring issue for streaming services, which are given vague and often inaccurate information from record companies”, explains Michel Buffa. “An artist like David Bowie is considered pop-rock, but he explored a dozen different musical genres.”
Such confusion leads to streaming services promoting the most popular songs, which then become even more popular. As a result, only 1% of artists’ back catalogues actually gets listened to.
“Our classification, which is based on combined analysis of audio and lyrics, can be used to make suggestions which are much more varied, targeting the musical tastes of individual listeners.”
New services through web audio
All that was left to do was to make this data accessible to all, using just a web browser. The researchers from Wasabi managed to do this by employing semantic web standards: RDF and RDFS to describe the data, and the query language SPARQL to formulate queries.
The possibilities became endless. Examples of its capacities include:
- finding every version of “My Way”;
- determining the prominence of themes such as love, death or money in songs from the 1990s;
- checking whether or not a song has been plagiarised;
- viewing Led Zeppelin's discography and listening to a two-minute audio summary of it;
- listing every musician that Steven Tyler, lead singer of Aerosmith, has played with during his career.
And this is far from an exhaustive list.
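Queries like these run over a knowledge base of RDF triples (subject, predicate, object). As a minimal sketch of the idea, and not the actual Wasabi schema or data, here is how triples can be stored and pattern-matched in plain Python, with `None` playing the role of a SPARQL `?variable` (all names and triples below are invented for illustration):

```python
# Toy triple store: each fact is a (subject, predicate, object) tuple.
# These triples are invented examples, not actual Wasabi data.
triples = [
    ("My Way", "performedBy", "Frank Sinatra"),
    ("My Way", "performedBy", "Sid Vicious"),
    ("My Way", "writtenBy", "Paul Anka"),
    ("Smoke On The Water", "performedBy", "Deep Purple"),
]

def match(pattern, store):
    """Return every triple matching the pattern; None is a wildcard,
    like a ?variable in a SPARQL triple pattern."""
    s, p, o = pattern
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "Find every version of 'My Way'" expressed as a triple pattern:
versions = match(("My Way", "performedBy", None), triples)
```

A real SPARQL endpoint does the same pattern matching at scale, with joins across millions of triples.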
At the same time, Michel Buffa imagined new services made possible by the W3C (World Wide Web Consortium) Web Audio API, a standard which gives browsers unprecedented sound synthesis and processing capabilities. Music teachers, for example, can use it to separate songs into individual instrument tracks. Say a guitar teacher wants to teach a pupil “Smoke On The Water”. All they have to do is send them the guitar part to work on, plus the rest of the song with the guitar removed, so that the pupil can play their part while being accompanied by Deep Purple.
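The "song minus one instrument" idea boils down to simple mixing: once the stems are separated, the backing track is the sample-wise sum of every stem except the one to practise. A minimal Python sketch, using tiny invented sample lists rather than real audio data:

```python
# Sketch of "song minus one instrument": given separated stems
# (one list of audio samples per instrument -- toy values here),
# the backing track is the position-wise sum of all the other stems.
stems = {
    "guitar": [0.1, 0.2, 0.3],
    "bass":   [0.0, 0.1, 0.1],
    "drums":  [0.2, 0.0, 0.2],
}

def backing_track(stems, exclude):
    """Mix every stem except `exclude` by summing samples position-wise."""
    kept = [samples for name, samples in stems.items() if name != exclude]
    return [round(sum(column), 6) for column in zip(*kept)]

minus_guitar = backing_track(stems, "guitar")  # bass + drums only
```

In a browser, the same mix would be produced by connecting the remaining stems' audio nodes to the output while leaving the excluded track disconnected or muted.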
An online guitar amp and synthesiser
The Wasabi team has also developed an online synthesiser capable of reproducing the sound of a number of models available on the market, including one famous model which costs the princely sum of €5,000.
Another service is an online guitar amp simulator. Simply connect your electric guitar to a sound card and a PC, and you will have access to blues, metal or acoustic sounds, in addition to a range of different effects, as though you were in the studio or on stage. A marketing agreement for this simulator was signed with the CNRS.
The contents of the Wasabi database are for scientists only, to be used for academic research projects, with strict rules regarding use designed to protect copyright. That said, anyone can submit a query from their web browser. Enthusiasts haven't finished exploring its two million songs...
Find out more
- The Wasabi project: making music accessible to everyone | Michel Buffa - Cabaret de la Science, L’Esprit sorcier, 24/10/2018.
- Music, AI and the artist, Centre national de la musique, 27/4/2021.
- How artificial intelligence is revolutionising the music industry, Le Figaro, 28/2/2020.
- Antescofo - the start-up using AI to revolutionise the way in which music is played, Inria, 6/12/2018.