Algorithmics

Pinocchio, the software that brings robots to life

Updated on 10/09/2024
For the past five years, the WILLOW joint project-team (Inria / CNRS / ENS PSL) has been seeking to improve robots' locomotion and manipulation capabilities through the use of perception as a means of interacting with the real world. Central to their method is Pinocchio, an open-source software program designed and developed by Justin Carpentier, a researcher at the Inria Paris centre. Already used for industrial applications, the tool is now also being deployed in a number of collaborative research projects, including at EU level.
© Inria - Justin Carpentier / Tiago, a latest-generation robot made available to the partners of the European AGIMUS project by the Barcelona-based company PAL Robotics.


When moving around, humans make dozens of decisions every second in relation to their environment and any variations within it. In the same situation, artificial intelligence has to be able to solve problems with tens of thousands of variables, doing so hundreds of times a second. “A humanoid robot travelling from point A to point B will create intermittent contacts with its environment”, explains Justin Carpentier, a researcher with WILLOW (Inria/ENS/CNRS). “This will involve putting one foot in front of the other, over and over again, while avoiding obstacles. This is where model predictive control comes in, and more specifically Pinocchio, a calculation engine that is highly effective when it comes to describing complex interaction phenomena.” 

Essentially, a robot must be able to anticipate the consequences of an ongoing action, factoring in its perception of its environment in real time. “Robots are basically automatons, which means that the software used to control their movements must be able to calculate errors through prediction and correct ongoing movements by taking the machine's dynamics into account.” Pinocchio was designed to handle each stage in this process: perception, analysis and adjustment.
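
To give a concrete feel for this prediction step, here is a minimal sketch using Pinocchio's Python bindings and the sample humanoid model shipped with the library; the posture and the zero torques below are purely illustrative, not a real controller.

```python
import numpy as np
import pinocchio as pin

# Load the sample humanoid model distributed with Pinocchio.
model = pin.buildSampleModelHumanoid()
data = model.createData()

q = pin.randomConfiguration(model)  # current posture (illustrative)
v = np.zeros(model.nv)              # current joint velocities
tau = np.zeros(model.nv)            # motor torques being applied

# Forward dynamics (Articulated Body Algorithm): predict the joint
# accelerations this state and these torques will produce, i.e.
# anticipate the consequences of the ongoing action.
a = pin.aba(model, data, q, v, tau)
print(a)  # one acceleration entry per degree of freedom
```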

Enabling a robot to move around

It was while studying for his PhD with the Gepetto project team at LAAS-CNRS in Toulouse that Justin Carpentier designed Pinocchio: “My PhD was on the computational foundations of anthropomorphic locomotion, i.e. writing an algorithm that would enable a robot to move around. At a practical level, this involves controlling its centre of mass and the motions its arms and legs must make to carry it forward.” Then, in 2018, Justin joined WILLOW, a team specialising in problems of representation in visual recognition, with the goal of expanding the laboratory into embodiment: the use of artificial perception to generate movement for robotic systems.
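
Pinocchio exposes exactly this kind of quantity. As a minimal sketch, again with the library's sample humanoid and an arbitrary neutral posture, the centre of mass can be computed in a few lines:

```python
import pinocchio as pin

model = pin.buildSampleModelHumanoid()
data = model.createData()

q = pin.neutral(model)  # the model's neutral posture

# Position of the robot's centre of mass in this configuration,
# the quantity a locomotion controller has to keep regulating.
com = pin.centerOfMass(model, data, q)
print(com)  # 3D position vector
```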

Having proven its worth, Pinocchio is now widely used in industry, where its generic design makes it possible to build reliable and efficient models of a wide range of robots. “Robots on production lines have very little interaction with their environments. Robots are more autonomous inside logistics warehouses, where they can communicate with each other, but interaction remains limited. The real challenge lies in dealing with contact interactions when the robot is to be used outside of factories or laboratories.”
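
That generic design is visible in the API: any robot described in a standard URDF file loads into the same model structure. A minimal sketch, with a hypothetical file path standing in for a real robot description:

```python
import pinocchio as pin

# Hypothetical URDF file describing some robot; Pinocchio builds the
# same generic model structure whatever the kinematic tree looks like.
urdf_path = "my_robot.urdf"  # placeholder path
model = pin.buildModelFromUrdf(urdf_path)
data = model.createData()

print(f"{model.name}: {model.njoints} joints, nq={model.nq}, nv={model.nv}")
```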

Understanding human movement

With processing times measured in microseconds, Pinocchio is highly effective at solving the equations of motion of complex systems. It can also be used to understand how human locomotion works.
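
As a rough illustration of that speed (actual figures depend on the model and the machine), one can time a single forward-dynamics evaluation over many repetitions:

```python
import time
import numpy as np
import pinocchio as pin

model = pin.buildSampleModelHumanoid()
data = model.createData()
q = pin.randomConfiguration(model)
v = np.zeros(model.nv)
tau = np.zeros(model.nv)

# Average the cost of one forward-dynamics call over many runs;
# on typical hardware this lands in the microsecond range.
n = 10_000
t0 = time.perf_counter()
for _ in range(n):
    pin.aba(model, data, q, v, tau)
print(f"~{(time.perf_counter() - t0) / n * 1e6:.1f} µs per evaluation")
```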

The work carried out by WILLOW involves modelling the phenomena that result from contact interaction and turning them into algorithms, which are then implemented in Pinocchio in order to capture the interactions of a robot or human with its environment. “We are currently extending Pinocchio in order to identify the dynamics of human movement, and the effort involved, from visual data.”

The concept involves filming humans carrying out activities, such as running or climbing, and then using the footage to create a digital twin, reproducing the movements in three dimensions on a visual map.
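
Here is a hedged sketch of how such a digital twin can feed into effort estimation, assuming a vision pipeline has already produced a joint-space trajectory (random configurations stand in for real reconstructed frames below): finite differences give velocities and accelerations, and inverse dynamics recovers the joint efforts behind the motion.

```python
import numpy as np
import pinocchio as pin

model = pin.buildSampleModelHumanoid()
data = model.createData()

dt = 1.0 / 30.0  # hypothetical video frame rate
# Stand-in for three consecutive frames reconstructed from footage.
q0, q1, q2 = (pin.randomConfiguration(model) for _ in range(3))

# Velocities between frames, expressed in the model's tangent space.
v0 = pin.difference(model, q0, q1) / dt
v1 = pin.difference(model, q1, q2) / dt
a = (v1 - v0) / dt  # acceleration by finite differences

# Inverse dynamics (RNEA): the joint efforts consistent with this motion.
tau = pin.rnea(model, data, q1, v0, a)
print(tau)
```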

Although Inria and the CNRS, under the supervision of Justin Carpentier, are chiefly responsible for the future development of Pinocchio, whose source code is openly available, the software is also being used in a number of EU projects.

Operating at an EU level

Launched in 2022, AGIMUS is a collaborative research project on robotics comprising nine partners from academia (the project leader LAAS-CNRS, Inria Paris and CTU Prague) and industry (the Toulouse-based companies Toward and Airbus, the Spanish company PAL Robotics, the Czech company Thimm and the Greek companies Kleeman and Qplan). They have come together to work on the research and development of new methodologies for generating the movements of robots, with a particular emphasis on manipulators. “Many robots used in industry are just automated arms on a stationary base. We are exploring the possibilities of perfecting the mobility of these robot arms in order to boost their autonomy and their interactions with operators.”

The euROBIN project, meanwhile, comprises 31 partners from 14 countries. Supported under the Horizon Europe programme, it aims to accelerate the industrial deployment of robotics and artificial intelligence solutions developed in academic research. A number of Inria teams are active participants in euROBIN (WILLOW, Acentauri, Defrost, Chroma, Robotlearn, Rainbow), alongside the Larsen project team, shared by Inria and Loria, which is in charge of the “Personal robots” component.