“When we talk of putting a robot within a crowd, an autonomous wheelchair for instance, we do not mean a machine that would keep, say, a three-foot security distance from pedestrians. That would only be a robot beside a crowd, not within. What we mean is a robot moving in close proximity to people and, at times, having physical contact with them,” sums up Julien Pettré, scientific coordinator of the Crowdbot project. Working from that premise, the whole point of this research is to make the robot's to-ings and fro-ings as smooth as possible while keeping the occasional bumps gentle and safe.
The consortium involves five academics (RWTH Aachen University, ETH Zurich, EPFL, Inria and University College London) as well as two companies (Locomotec and SoftBank Robotics Europe). The partners are tackling a variety of challenges in accordance with their respective area of expertise. At Inria, scientists are focusing on the study of Human-Robot Interaction (HRI) through simulation and virtual reality.
“With the help of simulation tools, one can throw virtual humans and virtual robots on a collision course at different speeds and different angles. Based on these parameters, reference charts then tell us what forces are exerted and how dangerous such contacts are. For instance, given the geometry of a robot and the particular shape and motion of a human body, we have ascertained the frequency of contacts for each part of the body. Typically, one can note that there is a risk of wheelchairs' armrests colliding with children's heads.”
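The reference-chart idea above can be illustrated with a minimal sketch: given collision parameters (body part hit, relative speed at impact), look up an estimated contact force and a danger level. The table values, the 1 m/s speed threshold, and the function names here are purely illustrative assumptions, not the project's actual data.

```python
# Illustrative lookup table: (body_part, speed_band) -> (approx. peak
# force in newtons, danger level). Values are invented for the sketch.
RISK_TABLE = {
    ("head", "slow"): (120.0, "moderate"),
    ("head", "fast"): (350.0, "high"),
    ("torso", "slow"): (80.0, "low"),
    ("torso", "fast"): (200.0, "moderate"),
}

def assess_contact(body_part: str, rel_speed: float) -> tuple[float, str]:
    """Classify a robot-human contact from the lookup table.

    rel_speed is the relative speed at impact in m/s; the 1.0 m/s
    band boundary is an assumption for the example.
    """
    band = "fast" if rel_speed > 1.0 else "slow"
    return RISK_TABLE.get((body_part, band), (0.0, "unknown"))
```

For example, `assess_contact("head", 1.5)` would flag a fast head contact as high-risk, matching the article's wheelchair-armrest-versus-child's-head concern.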
However, as efficient as these methods might be, a problem persists.
Whereas robots react the way they are programmed to, humans in the real world display a variety of personal reactions that are complex in nature and thus remain hard to model and simulate.
Indeed, pedestrians are not all the same. Some shuffle whereas others stride at a fast clip. Some react with swift motions whereas others are less responsive. Some are fit and agile whereas others are more lumbering or frail. Some amble absent-mindedly while others display acute awareness of their surroundings.
The possible combinations of these characteristics run quite a gamut. “Such variety is hard to cover. It would call for a huge number of experiments with real robots and many humans.” Not to mention a few bruises along the way.
Hence the idea of tackling this problem through virtual reality. “It will enable us to physically separate both entities ―the robot and the human― and then to reunite them in a virtual environment. We immerse a pedestrian wearing a Head Mounted Display (HMD) in a computer-controlled virtual situation. We also immerse a Pepper robot in a similarly virtual situation.” No HMD needed for the latter, though. “We bypass this step by directly feeding the robot with the data that would have been sensed if it were sensing the virtual scene through its camera or whatever sensor it has.”
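The sensor-bypass step can be sketched as follows: instead of rendering a scene for a physical sensor, compute directly the range readings the robot would have obtained in the virtual world and feed those to its navigation stack. This is a toy sketch under stated assumptions ―obstacles as points, a coarse beam model― whereas a real pipeline would ray-cast against full scene geometry.

```python
import math

def simulated_scan(robot_pose, obstacles, n_beams=8, max_range=5.0):
    """Synthetic range readings for a robot in a virtual scene.

    robot_pose is (x, y, heading in radians); obstacles is a list of
    (x, y) points (a simplifying assumption for this sketch). Returns
    one distance per beam, max_range where nothing is seen.
    """
    x, y, heading = robot_pose
    ranges = [max_range] * n_beams
    beam_width = 2 * math.pi / n_beams
    for ox, oy in obstacles:
        dx, dy = ox - x, oy - y
        dist = math.hypot(dx, dy)
        if dist >= max_range:
            continue  # out of sensor range, not perceived
        # Angle of the obstacle relative to the robot's heading,
        # wrapped to [0, 2*pi), then binned into a beam index.
        angle = (math.atan2(dy, dx) - heading) % (2 * math.pi)
        beam = int(angle / beam_width) % n_beams
        ranges[beam] = min(ranges[beam], dist)  # keep nearest hit
    return ranges
```

A robot at the origin facing along the x-axis, with a virtual pedestrian two metres ahead, would read a 2.0 m return on its forward beam and maximum range elsewhere.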
A Bonanza of Kinematic Data
The robot and the human perceive one another as if they were actually face to face. “We place them in close-to-contact situations. Each one reacts according to what the other does. In addition to their respective perception of each other, we can stage a whole environment, laying obstacles, adding simulated pedestrians including some who might be interacting with one another. What we study is the moment before the collision. We let the robot react in close proximity to the human and the human react in close proximity to the robot. The collision configuration is rather realistic compared to what is done in pure simulation.” The approach also yields a bonanza of kinematic data including not only trajectories but also body postures, for instance.
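The kind of kinematic log described above might be structured as below: one record per time-step holding positions and posture, from which proximity metrics can later be derived. Every name and field here is a hypothetical illustration of such a log, not the project's actual data format.

```python
import math
from dataclasses import dataclass, field

@dataclass
class FrameRecord:
    """One time-step of a kinematic log: positions plus body posture."""
    t: float                         # timestamp in seconds
    robot_xy: tuple[float, float]    # robot position in metres
    human_xy: tuple[float, float]    # human position in metres
    joint_angles: dict[str, float]   # posture, e.g. {"neck": 0.1} (illustrative)

@dataclass
class TrialLog:
    """Kinematic record of one robot-human encounter."""
    frames: list[FrameRecord] = field(default_factory=list)

    def record(self, frame: FrameRecord) -> None:
        self.frames.append(frame)

    def min_separation(self) -> float:
        """Closest robot-human distance over the trial -- a basic
        proximity metric recoverable from the trajectories alone."""
        return min(math.dist(f.robot_xy, f.human_xy) for f in self.frames)
```

Body postures (the `joint_angles` field) are what a pure trajectory dataset would lack; capturing them per frame is what makes the data "a bonanza" rather than just a pair of paths.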
Leveraging VR to study the immersion of mobile robots within crowds is a world first. “It actually calls for quite a substantial assembly of technologies, be it crowd behavior simulation, animation, rendering, immersion, etc. These domains happen to be well covered by the various scientific teams affiliated with our research center, here, in Rennes.” Also instrumental is the fact that this university town boasts two of the largest CAVE facilities in Europe: Immersia and Immermove.
A Whole Array of Possibilities
“This novel tool opens up a whole array of possibilities, says Crowdbot Project Manager Solenne Fortun. We can simulate everything, simulate only the robot, simulate only the human, immerse the robot in VR and simulate everything else, immerse both people and the robot in a VR situation. Even if the contact is not rendered ―for we don't have the technology to do that― we are able to assess this contact situation in a way that is much more consistent with reality.”
Still a work in progress, the platform will be made available to the whole research community sometime in the future. In essence, it can serve three purposes. First: study the human reaction. “We don't really know how humans react to robots given the machine's shape, its speed, its appearance, etc. It's hard to tell beforehand.”
A Testbed for Manufacturers
Second: assess robot behavior. “One can test the efficiency of algorithm A versus algorithm B, ascertaining which one delivers the best level of security.” As such, the tool could become a testbed for manufacturers interested in immersing their robots in a VR crowd situation and checking how they fare. Third: collect data. “Inference from observed data is becoming increasingly important as machine learning methods turn out to be very efficient at predicting human behavior.” On top of that, the very same methods can also enable the robot to learn navigation tasks.
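The algorithm-A-versus-algorithm-B comparison comes down to aggregating per-trial safety metrics. A minimal sketch, assuming each trial log has already been reduced to a contact count and a minimum robot-human distance (both fields, and the metric names, are illustrative):

```python
def safety_score(trials):
    """Aggregate per-trial results into simple comparison metrics.

    Each trial is a dict with 'contacts' (number of robot-human
    contacts) and 'min_dist' (closest approach in metres); these
    fields are assumptions for this sketch.
    """
    n = len(trials)
    return {
        "mean_contacts": sum(t["contacts"] for t in trials) / n,
        "worst_min_dist": min(t["min_dist"] for t in trials),
    }

# Two hypothetical navigation algorithms run over the same VR scenarios:
algo_a = [{"contacts": 0, "min_dist": 0.4}, {"contacts": 1, "min_dist": 0.1}]
algo_b = [{"contacts": 0, "min_dist": 0.6}, {"contacts": 0, "min_dist": 0.5}]
```

Here a lower `mean_contacts` and a higher `worst_min_dist` would favour algorithm B; a real testbed would of course weigh many more metrics, contact forces among them.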
A vast corpus of metrics might be of service to the European regulation authorities as well. As Pettré points out, “safety standards already exist in the field of cobotics in order to cover the interaction between workers and industrial robots. Pretty much in the same fashion, one can imagine that our current research will ultimately lead to similar standards, norms and laws which any robot treading public space will have to comply with.”