The fundamentals: smoothing interactions between humans and cobots

Date: changed on 09/07/2020
Pepper, a humanoid robot
© Inria / Photo C. Morel

Cobotics and its broad palette of tools

Working as a pair on a simple task, such as moving a wardrobe, can be complicated. But when one of the pair is a cobot, the task becomes even more complex. How can the task be divided into sequences? For each sequence, who is to direct and who is to execute? How might cobots perceive, interpret or anticipate the actions of the operator? How can you make sure that a cobot employs the right amount of force and acceleration? How can accidents be prevented without impinging upon people’s freedom of movement?

In response to these questions, researchers employ a large palette of tools, including simulation, field experiments, group models and cognitive sciences.

Making robot behaviour “socially acceptable”

The goal of Spring, an EU project coordinated by Inria (the Perception project team), is to design a companion robot that will sit in the waiting room of geriatric hospitals and which will be responsible for informing, reassuring and entertaining patients and their families. The cobot's behaviour will need to be “socially acceptable”, which will make it easier for it to interact with individuals or small groups.

It would be impossible to list all of the possible situations and to identify the ideal behaviour in each case. Instead, researchers opted for reinforcement learning. This involves determining the criteria for a successful interaction, such as a minimum interaction length. The cobot is then subjected to simulated situations, and its behaviour is evaluated against the established criteria. After this, the cobot will continue its learning with real people.
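The loop described above can be illustrated with a toy sketch. The states, actions, simulator and success criterion below are all hypothetical stand-ins, not the Spring project's actual model; the point is only to show how a reward tied to a minimum interaction length drives the learning.

```python
import random

# Toy reinforcement-learning sketch (hypothetical states, actions and
# simulator). The cobot tries actions in simulated encounters and is
# rewarded when the interaction lasts at least a minimum length.
ACTIONS = ["greet", "wait", "offer_info"]
STATES = ["person_approaching", "person_idle", "group_nearby"]
MIN_LENGTH = 5  # success criterion: interaction lasts >= 5 turns

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, epsilon = 0.1, 0.2  # learning rate, exploration rate

def simulate_interaction(state, action):
    """Crude simulator: some actions tend to hold attention longer."""
    base = {"greet": 4, "wait": 2, "offer_info": 6}[action]
    length = base + random.randint(-2, 2)
    return 1.0 if length >= MIN_LENGTH else 0.0  # reward: criterion met?

random.seed(0)
for episode in range(2000):
    state = random.choice(STATES)
    if random.random() < epsilon:                 # explore a random action
        action = random.choice(ACTIONS)
    else:                                         # exploit best known action
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    reward = simulate_interaction(state, action)
    q[(state, action)] += alpha * (reward - q[(state, action)])

best = max(ACTIONS, key=lambda a: q[("person_approaching", a)])
print(best)
```

In a real system the hand-written simulator would be replaced by a richer simulation of waiting-room encounters, and the learned policy would then be refined with real people, as the text notes.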

Such robots will be able to accompany elderly individuals on trips to the hospital: their friendly appearance will encourage people to interact, particularly if the patient is suffering from the type of cognitive issues experienced with Alzheimer’s disease. They will also be capable of making conversation and providing leisure and relaxation activities as part of high-quality care, in addition to that provided by healthcare professionals.

Maribel Pino, director of the Broca Living Lab, Broca Hospital - Assistance Publique – Hôpitaux de Paris

Driverless vehicles and pedestrians: the need for cohabitation

Digging deep into the available resources in social psychology, proxemics,* machine learning and deep learning, the Inria project teams Chroma and Pervasive are focused on a delicate task: optimising the way in which driverless vehicles and pedestrians cohabit in dense urban areas. This requires vehicles to be able to “see” pedestrians, to interpret their attitude in order to predict how they will act (e.g. are they planning to cross?), and to give clear signals to pedestrians, for example using LED lights fitted to the bodywork. It would take years to master the wide range of different scenarios, but one common goal is shared by robots and humans alike: to keep out of each other’s way.
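The "is this pedestrian planning to cross?" question can be made concrete with a toy sketch. The features, weights and thresholds below are invented for illustration; they are not the Chroma or Pervasive teams' actual models, which rely on machine learning rather than hand-tuned rules.

```python
from dataclasses import dataclass
import math

# Toy pedestrian-intent sketch (hypothetical cues and weights).
# The vehicle combines simple observations into a crossing probability.
@dataclass
class Pedestrian:
    dist_to_curb_m: float   # distance to the kerb, in metres
    speed_mps: float        # walking speed towards the road, m/s
    facing_road: bool       # head/body orientation cue

def crossing_score(p: Pedestrian) -> float:
    """Logistic combination of cues -> probability of crossing."""
    z = -1.0                                  # baseline bias
    z += 2.0 if p.facing_road else -1.0       # looking at the road
    z += 1.5 * p.speed_mps                    # moving towards the road
    z -= 0.8 * p.dist_to_curb_m               # far from the kerb -> less likely
    return 1 / (1 + math.exp(-z))             # squash to [0, 1]

waiting = Pedestrian(dist_to_curb_m=0.5, speed_mps=1.2, facing_road=True)
strolling = Pedestrian(dist_to_curb_m=4.0, speed_mps=0.3, facing_road=False)
print(round(crossing_score(waiting), 2), round(crossing_score(strolling), 2))
```

A learned model would extract such cues from camera and lidar data and fit the weights from examples, but the downstream decision is the same: the higher the score, the earlier the vehicle slows down or signals.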

*the study of human use of space

A “tree of possible futures” for programming a flying co-worker

Could it be possible to supply a worker on scaffolding with parts and tools using an autonomous drone with a robot arm? This is the scenario being studied as part of the French National Research Agency project ‘Flying co-worker’. It is a complex undertaking: the drone will base its actions exclusively on its visual observation of the worker. In turn, the worker will only want to cooperate if the drone acts appropriately.

Inria project team Larsen, which is involved in this project, is working on tools that reason over a “tree of possible futures”. In much the same way as in chess, this involves predicting how the worker might react before each action the drone performs. Even after “pruning” this tree of irrelevant branches (hypotheses), several thousand remain. The tree will be used as a basis for the drone control algorithm.
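A heavily simplified sketch of such a tree is shown below. The drone actions, worker reactions, probabilities and scores are all hypothetical, not the Larsen team's actual planner; the sketch only illustrates the mechanism the text describes: branch on predicted reactions, prune unlikely branches, then pick the action with the best expected outcome.

```python
# Toy "tree of possible futures" sketch (hypothetical actions, reactions
# and values). Each drone action branches on possible worker reactions;
# unlikely branches are pruned before computing expected values.
DRONE_ACTIONS = ["approach_slowly", "approach_fast", "hover"]

# For each drone action: list of (probability, value) worker reactions.
REACTIONS = {
    "approach_slowly": [(0.70, 10), (0.25, 3), (0.05, -5)],
    "approach_fast":   [(0.30, 12), (0.30, -2), (0.40, -10)],
    "hover":           [(0.90, 1),  (0.10, 0)],
}

PRUNE_BELOW = 0.1  # drop branches the worker is unlikely to take

def expected_value(action: str) -> float:
    """Expected outcome of an action over the unpruned branches."""
    branches = [(p, v) for p, v in REACTIONS[action] if p >= PRUNE_BELOW]
    total = sum(p for p, _ in branches)
    return sum(p * v for p, v in branches) / total  # renormalise after pruning

best = max(DRONE_ACTIONS, key=expected_value)
print(best)
```

The real tree is several levels deep (action, reaction, next action, and so on) and, as the text notes, still contains thousands of branches after pruning, which is why it feeds an algorithm rather than being inspected by hand.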