Street cacophony, subway screech, dishwasher hum... Modern life comes with all kinds of brouhaha, which can make conversation rather difficult at times. “The human body isn't equipped to cope with such a noisy background. When you think about it, until the Industrial Revolution our planet was a rather quiet place, so there wasn't really any physiological need for sound filtering,” says Aron Kapshitzer, co-founder of 5th Dimension, a startup whose glasses will provide users with an augmented hearing experience. “But first and foremost, our eyewear is meant to be a beautiful fashion accessory. It comes in different shapes, colors, and finishes, and it obeys all the codes of stylish design, just like any other brand you'll find in your favorite optical store. It packs some high tech, of course, but the technology must remain the unobtrusive icing on the cake, not a heavy, geeky gadget on your nose.” The novelty of 5th Dimension's glasses is thus discreetly nested in the frame's temples. “Our solution relies on controlled bone conduction. A miniaturized transducer creates a slight vibration on the skull, which transmits sounds directly to the inner ear without passing through the auricle,” fellow co-founder Sophie Serrero explains.
Like Deep Etching in Photoshop
The miniaturization of this electronic hardware is one of the most daunting challenges. Yet another crucial ingredient is the software behind the scenes. “Think of Photoshop,” Kapshitzer suggests. “You can deep-etch a subject of interest in an image, then crop out the background. It's the same with sound. You'll be able to turn your head toward someone, touch one of your glasses' temples, select the sound source that you want to hear better, and remove the noisy background. To do that, we needed two specific technology building blocks: one for sound localization, the other for sound separation.” And that is where Inria comes into play. “Not long ago,” Serrero recounts, “we were hosted at Station F, the biggest startup incubator in Paris, where Inria happens to maintain an office. That's how we learned of the research conducted in the field of sound and signal processing at the Inria center in Rennes.”
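The article does not detail the algorithms behind the localization brick, but a classic approach estimates the time difference of arrival (TDOA) of a sound between a pair of microphones, from which a direction can be derived. The sketch below is purely illustrative and assumes nothing about 5th Dimension's or Inria's actual implementation; it uses the well-known GCC-PHAT method on two synthetic microphone signals:

```python
import numpy as np

def gcc_phat(sig, ref, fs):
    """Estimate the delay (in seconds) of `sig` relative to `ref`
    using the Generalized Cross-Correlation with Phase Transform."""
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-12        # PHAT weighting: keep phase, discard magnitude
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    # Re-center the circular correlation so negative lags come first
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs

# Synthetic test: a noise burst reaching the second mic 5 samples later
fs = 16000
rng = np.random.default_rng(0)
src = rng.standard_normal(1024)
delay = 5
mic1 = src
mic2 = np.concatenate((np.zeros(delay), src[:-delay]))

tau = gcc_phat(mic2, mic1, fs)   # estimated inter-mic delay, in seconds
```

Given the known spacing between two sensors (here, the two temples of a pair of glasses), such a delay translates into an angle of arrival; aggregating estimates over several sensor pairs, as the article describes, makes the localization more robust.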
The baton was passed to Cécile Martin, business development manager at Inria Rennes. “Having a presence at Station F is really useful indeed,” she says. “Otherwise, we might well have missed this collaboration opportunity. Our relationship with 5th Dimension will follow a two-step process, as we have two technologies at different stages of maturity, both research findings of Panama,” a scientific team that specializes in sound and signal processing. “The sound localization software is mature enough, so it will mostly be a matter of engineering and adaptation to the company's specific context. We will have an in-house engineer working on this for two months. Offering fast-paced engineering services is precisely the purpose of InriaTech: it enables companies to get a proof of concept or a prototype very rapidly, which is often paramount, in particular for startups. The second step concerns sound separation and calls for more research. It will take the form of an 8-month work package involving more input from Panama's scientists, as well as the recruitment of a research engineer for the same duration.”
A 20-Year Research Endeavor
As Rémi Gribonval, head of Panama, confirms: “The first tool, Multichannel BSS Locate, has been available in MATLAB for a while. It addresses the problem of spatial localization and features several algorithms capable of aggregating the data coming from multiple pairs of sensors. So the first part of the collaboration will essentially consist in adapting this tool to 5th Dimension's use case. Part two involves FASST, our toolbox for audio source separation. Not all the tools in this box are interoperable yet, and this collaboration will give us the opportunity to achieve that interoperability. Another aspect concerns what we call source models. Separation works fine provided it can rely on good source models, so we need to identify those in the context of the glasses, and we will have to train our software for this particular scenario. The models at hand so far do not take into account whether the sound environment is, say, a cathedral, a business office, or a living room. This affects the kind of ambient noise that we will have to model and against which we will have to test the robustness of our methods. So there is a scientific interest in this collaboration: we hope it will bring to the surface new problems that will fuel our research in the future. Of course, it's also gratifying each time we see a company use our algorithms. It's the result of long-haul work, for we started working on source separation nearly 20 years ago.” As for the glasses, “they still require some maturation time. We are striving for perfection, as our intent is to deliver the best possible user experience,” Kapshitzer says. “We'll have a functional prototype this winter, but there will be a lot of testing thereafter,” Serrero adds. “Be that as it may, we plan to have the product in optical stores in the fall of 2019.”
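The "source models" Gribonval mentions are what make separation possible: if you know the typical spectral shape of each source, you can weight each time-frequency bin of the mixture toward the source that dominates it. The toy sketch below is not FASST and makes no claim about its internals; it only illustrates the underlying model-based masking principle (a Wiener-style soft mask), using two synthetic sources whose spectral models are known exactly:

```python
import numpy as np

def stft(x, win=256, hop=128):
    """Short-time Fourier transform with a Hann window (frames x bins)."""
    frames = [x[i:i + win] * np.hanning(win)
              for i in range(0, len(x) - win + 1, hop)]
    return np.fft.rfft(np.array(frames), axis=1)

fs = 8000
t = np.arange(fs) / fs
target = np.sin(2 * np.pi * 300 * t)              # low-frequency "voice" stand-in
interference = 0.5 * np.sin(2 * np.pi * 3000 * t) # high-frequency background

X = stft(target + interference)                   # spectrogram of the mixture

# Oracle source models: per-bin power of each source taken in isolation.
# In practice these would be learned, which is exactly why the acoustic
# environment (cathedral, office, living room) matters.
S = np.abs(stft(target)) ** 2
N = np.abs(stft(interference)) ** 2

mask = S / (S + N + 1e-12)   # Wiener-style soft mask, in [0, 1] per bin
est = mask * X               # estimated spectrogram of the target source
```

With window length 256 at 8 kHz, the interference sits around bin 96 and the target around bin 10; the mask leaves the target band nearly untouched while almost zeroing the interference band. The hard part in a real system, as the interview notes, is obtaining good source models without oracle access to the isolated sources.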