
Moveon - a solution for improving the reliability of tracking systems in dynamic environments

Moveon, the result of a partnership between DFKI (the German Research Center for Artificial Intelligence) and Inria, aims to develop a new generation of tracking algorithms that carry out geometric reasoning on high-level primitives extracted through deep learning.

Back in January, Inria and the Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) organised a first workshop between their research teams, having previously signed a memorandum of understanding geared towards joint research in artificial intelligence.  The goal was to get teams from the two countries to work together in interest groups in order to come up with ideas and joint projects.

Combining skillsets in augmented reality

Magrit, an Inria team, and Augmented Vision, a DFKI team, decided to pool their expertise in tracking for augmented reality, a technology whose applications now extend far beyond driverless vehicles.


The two sets of researchers complement each other well on this subject: Magrit have conducted noteworthy research in 3D tracking for augmented reality, with promising recent results in tracking that uses recognised objects as markers, while Augmented Vision have a great deal of experience in SLAM (simultaneous localisation and mapping: reconstructing an environment while tracking a moving camera in real time) and have recently worked on designing end-to-end systems for tracking and reconstruction.

“Both teams enjoy a good amount of visibility in the field of augmented reality, in addition to sharing the desire to perform geometric reasoning on high-level primitives in order to make augmented reality systems more durable and more reliable”, explains Marie-Odile Berger, the researcher who heads up the Magrit project team.

More reliable, flexible systems

Christened Moveon, the project team intends to draw upon progress in object recognition and, more generally, in deep learning to develop a new generation of tracking algorithms. In practical terms, deep-learning-based recognition and understanding of high-level concepts such as vanishing points or classes of volumetric objects will be combined with geometric reasoning for spatio-temporal tracking and for reconstructing environments.
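The project's own algorithms are not described here; purely as an illustrative sketch, the Python snippet below shows one standard way learned detections and geometric reasoning can be combined: 2D keypoints of a recognised object (as a deep network might detect them) are matched to the object's known 3D model and passed to OpenCV's PnP-RANSAC solver to estimate the camera pose. All point coordinates, the camera intrinsics and the object model are hypothetical, not taken from Moveon.

# Illustrative sketch only: combining learned object detections with a
# geometric pose-estimation step. All values below are hypothetical.
import numpy as np
import cv2

# 3D keypoints of a recognised object in its own coordinate frame (metres),
# e.g. corners of a known piece of equipment identified by a detector.
object_points = np.array([
    [0.0, 0.0, 0.0],
    [0.2, 0.0, 0.0],
    [0.2, 0.1, 0.0],
    [0.0, 0.1, 0.0],
    [0.0, 0.0, 0.05],
    [0.2, 0.1, 0.05],
], dtype=np.float32)

# Matching 2D detections in the current image (pixels), as a deep network
# might output them; in practice each detection carries a confidence score.
image_points = np.array([
    [320.0, 240.0],
    [400.0, 242.0],
    [402.0, 300.0],
    [322.0, 298.0],
    [321.0, 230.0],
    [401.0, 290.0],
], dtype=np.float32)

# Pinhole camera intrinsics (hypothetical calibration), no lens distortion.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]], dtype=np.float32)
dist_coeffs = np.zeros(5, dtype=np.float32)

# Geometric reasoning step: robustly estimate the camera pose relative to
# the object from 2D-3D correspondences (PnP inside a RANSAC loop).
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    object_points, image_points, K, dist_coeffs,
    reprojectionError=3.0)

if ok:
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix from axis-angle vector
    print("Camera-to-object rotation:\n", R)
    print("Translation (m):", tvec.ravel())

In such a scheme the learned component supplies the high-level primitives, while the pose itself is obtained by explicit geometry, so new objects can be handled by adding their 3D models rather than by retraining the whole system.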

“As opposed to end-to-end systems, which require comprehensive learning and which struggle to integrate the inherent constraints of perspective projection or 3D modelling into their structures, what we want to do is to design more flexible systems, limiting the need for re-learning when it comes to handling new environments. These systems will be easily scalable and will have greater temporal reliability given their capacity for accurate geometric reasoning on objects”, explains Marie-Odile Berger.

The project, which was officially set up in late August, had its kick-off meeting on 10 September. Two PhD students and an engineer were recently recruited, the aim being to showcase these tracking algorithms through project demonstrators.