Robots capable of interacting with humans
What if robots could take on the repetitive tasks involved in receiving the public? Science fiction? For the moment, maybe... We already know of forms of artificial intelligence that are capable of interacting with humans. One example is the therapeutic robot seal Paro, which is used in healthcare to provide the benefits of zootherapy to patients with cognitive disorders. However, these mediation tools offer only rudimentary capabilities, or require remote control by an engineer.
“They lack the ability for multi-person interaction. If two people are having a conversation or if they both ask the robot a question, present-day robots are incapable of understanding,” says Xavier Alameda-Pineda, head of the RobotLearn team at the Inria Grenoble Rhône-Alpes centre.
Numerous perceptive skills are required
While there are “butler” robots which can provide the weather forecast or give geographical directions, they cannot autonomously execute complex social tasks, such as escorting users around a building. To carry out such tasks, a social robot must be capable of perceiving and distinguishing the signals emitted by different speakers, understanding these signals and identifying those addressed to the robot, and then reacting accordingly. This is a daunting challenge, because it requires numerous perceptive abilities and a capacity for machine learning to support autonomous decision-making.
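The perceive–understand–react loop described above can be sketched as a toy program. Everything here is illustrative: the `Utterance` class, the `select_request` and `react` functions, and the idea of a pre-computed `addressed_to_robot` flag are invented for this sketch (in a real system, addressee detection would itself be a hard perception problem), and none of it corresponds to the SPRING software.

```python
from dataclasses import dataclass
from typing import List, Optional

# Toy sketch of the perceive -> understand -> react loop described above.
# All names here are hypothetical, not SPRING or ARI APIs.

@dataclass
class Utterance:
    speaker_id: int
    text: str
    addressed_to_robot: bool  # in a real robot, inferred from gaze/audio cues

def select_request(utterances: List[Utterance]) -> Optional[Utterance]:
    """From overlapping conversations, keep only speech addressed to the robot."""
    for u in utterances:
        if u.addressed_to_robot:
            return u
    return None

def react(request: Optional[Utterance]) -> str:
    """Decide on an action for the selected request."""
    if request is None:
        return "keep listening"
    if "sit" in request.text.lower():
        return f"escort speaker {request.speaker_id} to a vacant seat"
    return "answer the question"

# Two people talking at once; only one utterance is meant for the robot.
scene = [
    Utterance(1, "Did you see the match yesterday?", False),
    Utterance(2, "Where can I sit?", True),
]
print(react(select_request(scene)))  # escort speaker 2 to a vacant seat
```

The point of the sketch is the separation of concerns: filtering out speech not meant for the robot is a distinct step from deciding how to act on it, which mirrors the multi-person interaction problem the article describes.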
To address this issue, Xavier Alameda-Pineda, a specialist in audio-visual perception, teamed up with seven partners of renowned expertise in fields complementary to his own (see slide). Together, they run SPRING (Socially Pertinent Robots for Gerontological Healthcare), a four-year, €8.3-million project funded by the European Horizon 2020 programme, which the young researcher coordinates.
A multi-disciplinary project
Four of the partners are from the academic world (Prague Technical University, Czech Republic; the University of Trento, Italy; Heriot-Watt University, Edinburgh, UK and Bar-Ilan University, Israel).
Alongside RobotLearn, the task of these research teams is to develop the dialogue, audio processing, human behaviour analysis and spatial awareness systems of ARI, a humanoid robot developed by one of the project’s technology partners, PAL Robotics. ERM Automatismes, the other technology partner, is in charge of integrating the researchers’ various advances into the robot’s software. Lastly, a hospital, AP-HP (Île-de-France University Hospital), which already uses the robots Paro and Nao in the Broca Living Lab, will provide SPRING with an experimental setting made up of patients and medical staff.
An artificial intelligence system capable of self-correction
But how do we enable a robot to identify, among several ongoing conversations, which request is addressed to it; to understand that it is being asked where a person may sit; to look around and find a vacant seat; to determine a path to accompany the speaker to that seat while avoiding the other patients and staff on the premises; and then to judge whether it would be welcome to offer some distraction in the form of conversation? There are numerous technological hurdles to overcome in order to accomplish this type of complex task.
With regard to movement, RobotLearn opted for reinforcement learning. To determine its speed, approach angle and other movement parameters, the robot is trained by an artificial intelligence system that measures how close the action actually taken is to the optimal one, and attributes “rewards” for successful outcomes. This training phase enables the robot to encounter a wide variety of possible situations fully autonomously, without human intervention to correct its paths. Once placed in real conditions, ARI continues to learn and to identify the optimal action for each situation, which opens up the possibility of its use in a hospital setting.
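The reward-driven training loop described above can be illustrated with textbook tabular Q-learning on a tiny navigation problem. This is emphatically not the SPRING system: the one-dimensional “corridor”, the reward values and the hyperparameters are all invented for the sketch, which only shows the principle of trying actions, receiving rewards and gradually preferring the action with the best learned value.

```python
import random

# Textbook tabular Q-learning sketch of reward-based training (illustrative
# only, not the SPRING/ARI system). The agent starts at cell 0 of a short
# corridor and must learn to reach the "seat" at the far end.

N = 5               # corridor cells 0..4; the goal ("seat") is cell 4
ACTIONS = [-1, 1]   # step left or step right
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

random.seed(0)
for _ in range(500):                # training episodes
    s = 0
    while s != N - 1:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2 = min(max(s + a, 0), N - 1)          # move, staying inside the corridor
        r = 1.0 if s2 == N - 1 else -0.1        # reward on reaching the seat
        # Q-learning update: nudge the value toward reward + discounted best future value
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy walks straight toward the goal.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N - 1)]
print(policy)  # [1, 1, 1, 1]
```

The small negative reward per step plays the role of the “adequacy” signal the article mentions: wandering is penalised, so the learned policy converges on the shortest path without anyone hand-correcting the trajectories.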
Using a hospital setting to test the assets of a social robot
This is the aim of the second phase of SPRING, which is due to start in 2022: to validate the use of the robot in a hospital and to assess its impact on users and their habits, in addition to its acceptability. Entrusting even simple social tasks to a robot is nevertheless far from innocuous and raises numerous ethical and organisational issues, which Etienne Berger, a research engineer at the Broca Living Lab, is looking into (see insert).
In any case, one thing is clear for Xavier Alameda-Pineda: “I believe it’s neither possible nor desirable to replace humans with robots.” It is more a matter of complementarity, and of taking advantage of what a social robot can provide: the ability to carry out repetitive tasks at a constant level of quality, and a lack of judgement that can reassure users.
Ethics and acceptability
To supervise the research being led by the SPRING consortium, a strict ethical framework based on existing recommendations and directives was put in place. Approved by the competent bodies (CPP, CNIL, etc.), it ensures the protection of subjects involved in the hospital trials.
At the same time, Etienne Berger, who is responsible for monitoring these issues for SPRING, is leading a field study to assess the acceptability of the robot among the various stakeholders of the service (users and accompanying persons, medical and administrative staff) and to measure the robot’s organisational impact on the running of the service.
Beyond the protection of personal data (in particular, medical data), this project raises questions with regard to the management of psycho-social effects of the robot on the various stakeholders of the hospital service (patients and accompanying persons, medical and administrative staff). According to the researcher, the watchword is “to anticipate risk situations and take the vulnerability of users into account.” Initial surveys in the field reveal a mixture of apprehension and expectation among future users, in addition to curiosity and pragmatism. The implementation of these new forms of gerontechnology would appear to be inevitable.
Xavier Alameda-Pineda in five key dates
- 2010: Second year of a Master’s in Computer Science in France, then a PhD thesis with the Perception team at the Inria Grenoble Rhône-Alpes centre.
- 2014-2016: Postdoctoral research at the University of Trento, Italy.
- 2016: Rejoins the Perception team as a research fellow.
- 2018: Receives the SIGMM Rising Star Award from the scientific society ACM for his contributions to the understanding of multi-modal social behaviour.
- July 2021: Becomes head of the RobotLearn team, the successor to Perception.