This workshop will bring together researchers interested in the computational aspects of enabling robots to interact with and learn from humans.

  • Date: 21/07/2016 to 22/07/2016
  • Place: Inria center of Paris, 2 rue Simone Iff, 75012 Paris, Room Jacques-Louis Lions (1)
  • Organiser(s): Anca DRAGAN and John CANNY


July 21
9:15am Welcome and Introductions
Function and Communication
9:30am  Rodolphe Gelin, "The Human-Humanoid Interaction"
10:00am Ross Knepper, "Duality of Robot Actions in a Collaborative Context"
10:30am break
11:00am Brad Hayes, "Autonomous Task Assistance via Human-Robot Collaboration: Learning and Synthesizing Supportive Behaviors"
11:30am Rachid Alami, "On decisional abilities for a cognitive and interactive robot"
12-12:30pm panel
2:00pm Welcome from Inria Paris director Isabelle Ryl
Driving and rolling
2:10pm Jean-Paul Laumond, "Humans are not walking, they are rolling!"
2:40pm Fawzi Nashashibi, "Human-vehicle interaction: integrating driving behaviours in vehicle automation"
Formalisms for interaction
3:10pm Anca Dragan, "How Robots Influence Our Actions"
3:40pm Dylan Hadfield-Menell, "Cooperative Inverse Reinforcement Learning: Human Robot Interaction as a Cooperative Game"
4:10-4:30pm break
4:30pm Vince Hayward, "Invariants and Perceptual Illusions in Touch"
5:00pm Jean Ponce, "New Techniques for Image Matching"
5:30-6pm panel

July 22
Task structure
9:30am  Scott Niekum, "Discovering Structure in Robotics Tasks via Demonstrations and Active Learning"
10:00am Jean-Pierre Merlet, "Learning human walking pattern with robots"
10:30am break
Assistive Robots
11:00am Marie Babel, "Wheelchair mobility assistance: enhancing the driving experience"
11:30am Brenna Argall, "The Question of Control for Interactions with Assistive Robots"
12-12:30pm panel


  • Rachid ALAMI  LAAS-CNRS (France)
  • Brenna ARGALL  Northwestern University (USA)
  • Marie BABEL  Inria (France)
  • Anca DRAGAN  UC Berkeley (USA)
  • Rodolphe GELIN  Aldebaran (France)
  • Dylan HADFIELD-MENELL  UC Berkeley (USA)
  • Brad HAYES  MIT (USA)
  • Vincent HAYWARD  ISIR (France)
  • Ross KNEPPER  Cornell University (USA)
  • Jean-Paul LAUMOND  LAAS-CNRS (France)
  • Jean-Pierre MERLET  Inria (France)
  • Fawzi NASHASHIBI  Inria (France)
  • Scott NIEKUM  University of Texas at Austin (USA)
  • Jean PONCE  Inria/ENS (France)

Acknowledgments:

This workshop was supported in part by Inria and the ERC grant VideoWorld.

  • Rachid ALAMI CNRS (France)

Dr. Rachid Alami is Senior Scientist at CNRS. He received an engineering diploma in computer science from ENSEEIHT in 1978, a Ph.D. in Robotics from Institut National Polytechnique in 1983, and an Habilitation (HDR) from Paul Sabatier University in 1996. He has contributed to, and held key responsibilities in, several national, European and international research and collaborative projects (EUREKA: FAMOS, AMR and I-ARES; ESPRIT: MARTHA, PROMotion, ECLA; IST: COMETS; IST FP6: COGNIRON, URUS, PHRIENDS; FP7: CHRIS, SAPHARI, ARCAS, SPENCER; France: ARA, VAP-RISP for planetary rovers, PROMIP, ANR projects). His main research contributions fall in the fields of robot decisional and control architectures, task and motion planning, multi-robot cooperation, and human-robot interaction. Rachid Alami is currently the head of the Robotics and InteractionS group at LAAS.

Title: On decisional abilities for a cognitive and interactive robot
 This talk addresses some key decisional issues that are necessary for a cognitive robot that shares space and tasks with a human. We adopt a constructive approach based on the identification and effective implementation of individual and collaborative skills. The system is comprehensive: it aims at dealing with a complete set of abilities, articulated so that the robot controller can flexibly conduct human-robot collaborative problem solving and task achievement. These abilities include geometric reasoning and situation assessment based essentially on perspective-taking and affordances; management and exploitation of each agent's (human and robot) knowledge in a separate cognitive model; human-aware task planning; and interleaved execution of shared plans.

  • Brenna ARGALL   Northwestern (USA)

Brenna Argall is the June and Donald Brewer Junior Professor of Electrical Engineering & Computer Science at Northwestern University, and also an assistant professor in the Departments of Mechanical Engineering and Physical Medicine & Rehabilitation. Her research lies at the intersection of robotics, machine learning and human rehabilitation. She is director of the assistive & rehabilitation robotics laboratory (argallab) at the Rehabilitation Institute of Chicago (RIC), the premier rehabilitation hospital in the United States. The mission of the argallab is to advance human ability by leveraging robotics autonomy. Argall is a 2016 recipient of the NSF CAREER award. She received her Ph.D. in Robotics (2009) from the Robotics Institute at Carnegie Mellon University, as well as her M.S. in Robotics (2006) and B.S. in Mathematics (2002). Prior to joining Northwestern and RIC, she was a postdoctoral fellow (2009-2011) at the École Polytechnique Fédérale de Lausanne (EPFL), and prior to graduate school she held a Computational Biology position at the National Institutes of Health (NIH).

Title: The Question of Control for Interactions with Assistive Robots

 It is a paradox that often the more severe a person's motor impairment, the more challenging it is for them to operate the very assistive machines which might enhance their quality of life. A primary aim of my lab is to address this paradox by incorporating robotics autonomy and intelligence into assistive machines, offloading some of the control burden from the user. The human-robot team in this case is a very particular one: the robot is physically supporting the human, and replacing or enhancing lost or diminished function. Getting the control sharing right is essential, and will be critical for the adoption of physically assistive robots within larger society. For reasons of end-user economics and established user validation, in my lab we narrow our focus to commercially available control interfaces already widely used to operate assistive machines like powered wheelchairs. Characteristics of these interfaces, as well as the motor impairments of the human operators, are critical factors in how human end-users interact with the robot's autonomy. However, within the domain of assistive robotics generally, and smart wheelchairs in particular, comparative study of control-sharing paradigms and commercial control interfaces has been minimally addressed. This talk will focus on such a comparative study currently underway in my lab, and also on the question of control sharing that adapts over time with the user's preferences and abilities.

  • Marie BABEL Computer Science INSA Rennes, IRISA/Inria (France)

Within the IRISA/Inria lab, Marie's research tackles robotic vision issues, and more particularly assistive robotics. In this context, she actively participates in the Inria large-scale initiative action PAL (Personally Assisted Living). In particular, she has proposed semi-autonomous navigation solutions for a robotized wheelchair, using dedicated embedded vision systems together with visual servoing frameworks; this work includes visual feature detection and tracking. In addition, she led the APASH and HandiViz projects (2012-2015), which aimed at designing a driving assistance system for wheelchairs; the resulting technology is currently being transferred to the Ergovie company (Rennes). Tests with disabled patients at the rehabilitation center Pôle Saint Hélier (Rennes) are in progress, and results demonstrate the ability of the assistive system to smoothly correct the trajectory of the wheelchair in hazardous situations. Marie currently leads the ISI4NAVE Inria Associated Team (a collaboration with UCL London); current research is oriented towards multimodal sensor-based servoing, as well as haptic feedback for intuitive assistive wheelchair navigation.

Title: Wheelchair mobility assistance: enhancing the driving experience
As intelligent robots are poised to address a growing number of issues in the service and medical care industries, it is important to determine how users, as well as other humans, interact with such robots in order to accomplish common objectives. Particularly in the assistive intelligent wheelchair domain, preserving the user's sense of autonomy is required, as individual agency is essential for their physical and social well-being. Our work thus aims to globally characterize the idea of adaptivity within human-robot shared control, while devoting particular attention to different mobility issues within the assistive wheelchair domain, viz. vision-based corridor navigation assistance, reactive control for obstacle avoidance, and human-aware motion generation, while considering biofeedback issues.

  • Anca DRAGAN  UC Berkeley (USA)

Anca Dragan is an Assistant Professor at UC Berkeley in the EECS Department, where she runs the InterACT lab. She received her PhD from the Robotics Institute at Carnegie Mellon, working on planning intent-expressive motion. Her work is in algorithmic human-robot interaction, bridging robotics algorithms with HCI and cognitive science, and focusing on enabling robots to autonomously generate their behavior in a manner that accounts for interaction and coordination with people.

Title: How Robots Influence Our Actions
 Robots often treat humans as obstacles to be avoided. But humans are approximately rational agents that plan and react to what happens in the world, and as a consequence robot actions influence what they do. We formulate interaction with humans as an underactuated system and explore the consequences of different cost functions that the robot can optimize, yielding more effective robots, robots that are better at coordinating with people, and robots that can help people overcome suboptimality.
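The "underactuated system" view can be illustrated with a deliberately tiny toy example (my own sketch, not the speaker's code): the robot cannot command the human directly, but because the (modeled as rational) human best-responds to the robot's action, the robot's choice indirectly steers the joint outcome. All payoff numbers below are invented for illustration.

```python
# Joint cost J[r][h] for robot action r and human action h (hypothetical numbers),
# and the human's own cost, which the human minimizes given the robot's action.
JOINT_COST = [[3.0, 1.0],
              [0.0, 4.0]]
HUMAN_COST = [[2.0, 0.0],
              [0.0, 2.0]]

def human_response(r):
    """The human best-responds to the robot's action r."""
    return min(range(2), key=lambda h: HUMAN_COST[r][h])

def robot_plan():
    """The robot optimizes over its actions while anticipating
    the human response each action induces."""
    return min(range(2), key=lambda r: JOINT_COST[r][human_response(r)])

r = robot_plan()
print(r, human_response(r))  # prints: 1 0
```

Note that a robot that ignored the induced response (say, assuming the human's action is fixed at h=1) would choose r=0 here; anticipating the human's reaction is exactly what the underactuated formulation buys.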

  • Rodolphe GELIN  Aldebaran (France)

Rodolphe Gelin (b. 1965) started his career at CEA (the French Atomic Energy Commission), where he worked for 10 years on mobile robot control for industrial applications and on rehabilitation robotics. He then led different teams working on robotics, virtual reality and cognitics. In 2009, he joined SoftBank Robotics as head of collaborative projects. He is the leader of the French project ROMEO, which aims to develop a human-size humanoid robot. Since 2016, he has been Chief Scientist Officer at SoftBank Robotics.

Title:  The Human-Humanoid Interaction
Abstract: Good human-robot interaction is a key issue for the acceptability of this new device in our everyday life. The humanoid shape chosen by SoftBank Robotics makes things easier in one way, but much more difficult in another. When the robot is humanoid, and nice looking, people spontaneously want to interact with it. But they also expect it to react like a human being, which is not achievable today. In my talk, I will try to explain the complex trade-off we have to make between human-machine interaction and human-human interaction to achieve good human-humanoid interaction.

  • Dylan HADFIELD-MENELL  UC Berkeley (USA)

I'm a third-year Ph.D. student at UC Berkeley, advised by Pieter Abbeel and Stuart Russell. My research focuses on applications of artificial intelligence methods to robotics. Recently, my work has focused on the value alignment problem: the problem of building an artificial intelligence whose goals are aligned with those of its designers. I'm also interested in hierarchical task and motion planning and in general problems that arise from decision making under uncertainty with long horizons. I have also done some work on resource-constrained scheduling and learning from demonstrations. Before coming to Berkeley, I did a Master of Engineering with Leslie Kaelbling and Tomás Lozano-Pérez at MIT. When I'm not working on research, I'm usually at a concert, reading a sci-fi or fantasy novel, or playing ultimate frisbee.

Title: Cooperative Inverse Reinforcement Learning: Human Robot Interaction as a Cooperative Game
 My talk will present Cooperative Inverse Reinforcement Learning (CIRL), a novel mathematical framework for human-robot interaction. We model the world as a cooperative game between two players, a human and a robot; both are rewarded according to the human's reward function, but the robot does not initially know what this is. This creates incentives for the robot to learn the human's preferences in order to help maximize reward. CIRL also models the human's incentives to teach the robot, and so our approach gives a principled definition of communicative (as opposed to expert) demonstration. I will present basic structural results about the framework and discuss some initial experimental results.
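The robot's side of this cooperative game can be sketched in a minimal toy (my illustration, not the paper's implementation): the robot keeps a belief over which reward function the human has, performs a Bayesian update after observing a noisily (Boltzmann-) rational human action, and then acts to maximize expected reward under that belief. All names and numbers here are invented for the example.

```python
import math

# Two candidate reward functions the human might have, indexed by theta.
# Entry REWARDS[theta][a] is the reward of action a under theta.
REWARDS = {0: [1.0, 0.0],   # under theta=0, action 0 is good
           1: [0.0, 1.0]}   # under theta=1, action 1 is good

def human_action_likelihood(action, theta, beta=5.0):
    """Boltzmann-rational human: P(a | theta) proportional to exp(beta * R_theta(a))."""
    z = sum(math.exp(beta * r) for r in REWARDS[theta])
    return math.exp(beta * REWARDS[theta][action]) / z

def update_belief(belief, observed_action):
    """Bayesian update of P(theta) after observing a human action."""
    post = {t: p * human_action_likelihood(observed_action, t)
            for t, p in belief.items()}
    z = sum(post.values())
    return {t: p / z for t, p in post.items()}

def robot_best_action(belief):
    """The robot maximizes expected reward under its belief about theta."""
    expected = [sum(p * REWARDS[t][a] for t, p in belief.items())
                for a in (0, 1)]
    return max(range(2), key=lambda a: expected[a])

belief = {0: 0.5, 1: 0.5}           # uniform prior over the human's reward
belief = update_belief(belief, 1)    # the human demonstrates action 1
print(robot_best_action(belief))     # prints: 1
```

The full framework also models the human's incentive to choose demonstrations that are informative for this update, which a one-sided sketch like this cannot capture.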

  • Dr. Bradley HAYES  Massachusetts Institute of Technology (USA)

Dr. Bradley Hayes is a Postdoctoral Associate in the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. Brad's research interests center around developing the algorithms necessary for building supportive, interactive, and intuitive robotic systems that are capable of performing complex collaborative tasks in environments shared with humans. His work combines learning from demonstration, intention recognition, human teaming psychology, activity modeling, and human-robot interaction. Brad received his Ph.D. in Computer Science from Yale University, where his thesis received the department's nomination for the 2015 ACM Dissertation Award. Brad is an organizer of this year's AAAI Fall Symposium on Artificial Intelligence for Human-Robot Interaction (AI-HRI 2016), and served as the general chair of both the 2015 HRI Workshop on Human-Robot Teaming and AI-HRI 2015. He is the recipient of the Boston College Accenture Award in Computer Science (2008), and is an RSJ/KROS Distinguished Interdisciplinary Research Award finalist (2014). His work has appeared in Wired, the BBC, Popular Science, Discover Magazine, and the Boston Museum of Science.

Title: Autonomous Task Assistance via Human-Robot Collaboration: Learning and Synthesizing Supportive Behaviors
 Robots that are capable of fluent collaboration with humans are capable of revolutionizing a wide array of industries, ranging from health care to education to manufacturing. Particularly in domains where modern robots are ineffective, human-robot teaming can be leveraged to increase the efficiency, capability, and safety of people. Central to building these autonomous systems are the problems of task modeling, teammate goal inference, and multi-agent coordination, each of which can be extremely challenging without a priori task knowledge or behavioral models of one's collaborators. In this talk I will cover recent work toward developing robots that learn tasks from co-workers and assist in their completion, both learning and synthesizing supportive behaviors: actions that a collaborator can perform to facilitate teammates' task completion or comprehension.

  • Vincent HAYWARD  ISIR (France)

Vincent Hayward (Dr.-Ing., 1981 Univ. de Paris XI) was Postdoctoral Fellow  (1981-82) at Purdue University, and joined CNRS, France, as Chargé de Recherches in 1983. In 1987, he joined the Department of Electrical and Computer Engineering at McGill University as assistant, associate and then full professor (2006). He was the Director of the McGill Center for Intelligent Machines from 2001 to 2004 and held the "Chaire internationale d'haptique" at the Université Pierre et Marie Curie from 2008 to 2010. He is now Professor at UPMC.

Title:  Invariants and perceptual illusions in touch 
Abstract: Touch, like the other senses, is partly a goal-directed collection of information from physical processes and partly a computational process. In touch, the conversion of physical signals into conscious percepts remains quite mysterious today. The study of this process, at least in its early stages, can be approached through the study of illusions and the identification of the invariants that lurk behind them. Some of these invariants may even have clear neural correlates.

  • Ross KNEPPER  Cornell University (USA)

Ross A. Knepper is an Assistant Professor in the Department of Computer Science at Cornell University. His research focuses on the theory, algorithms, and mechanisms of automated collaborative assembly. Previously, Ross was a Research Scientist in the Distributed Robotics Lab at MIT. Ross received his M.S. and Ph.D. degrees in Robotics from Carnegie Mellon University in 2007 and 2011. Before his graduate education, Ross worked in industry at Compaq, where he designed high-performance algorithms for scalable multiprocessor systems, and also in commercialization at the National Robotics Engineering Center, where he adapted robotics technologies for customers in government and industry. Ross has served as a volunteer for Interpretation at Death Valley National Park, California.

Title:  Duality of Robot Actions in a Collaborative Context
Abstract: In a social context, every action performed by a human or a robot has a dual nature, both functional and communicative. This talk addresses the communicative aspect and describes how and why communication is encoded as a channel on top of functional actions. It serves to make collaborations more efficient and effective by exchanging crucial information about goals and capabilities. Interpreting the communicative aspects of an action requires an understanding of the action in the context where it occurred. I give theory and examples in social navigation, natural language and gesture communication, and other domains.

  • Jean-Paul LAUMOND   LAAS-CNRS, Toulouse

Jean-Paul Laumond, IEEE Fellow, is a roboticist. He is Directeur de Recherche at LAAS-CNRS (team Gepetto) in Toulouse, France. His research is devoted to robot motion planning and control. From 2000 to 2002, he created and managed Kineo CAM, a spin-off company from LAAS-CNRS devoted to developing and marketing motion planning technology in the field of virtual prototyping. Siemens acquired Kineo CAM in 2012. In 2006, he launched the research team Gepetto, dedicated to human motion studies along three perspectives: artificial motion for humanoid robots, virtual motion for digital actors and mannequins, and natural motion of human beings. He teaches robotics at École Normale Supérieure in Paris. He publishes in robotics, computer science, automatic control and, recently, neuroscience. He was the 2011-2012 recipient of the Chaire Innovation technologique Liliane Bettencourt at Collège de France in Paris. His current project, Actanthrope (ERC-ADG 340050), is devoted to the computational foundations of anthropomorphic action.

Title:  Humans are not walking, they are rolling!
Abstract: The objective of the talk is to make sense of this abstruse statement. Indeed, the wheel appears to be a plausible model of bipedal walking. We report on preliminary results developed along three perspectives combining biomechanics, neurophysiology and robotics. From a motion-capture database of human walkers, we first show that goal-oriented locomotion obeys the same nonholonomic laws as a rolling wheel. Making use of inverse optimal control techniques, we show how the geometric shape of the locomotor trajectories reveals the role of perception in trajectory formation. In the second part of the talk, the center of mass (CoM) is presented as a geometric center from which the motions of the feet are organized. Finally, we show that the rimless wheels that model most passive robot walkers are better controlled when equipped with an articulated, stabilized mass on top of them, i.e. an articulated head. This suggests a top-down control of human walking, in contrast to the bottom-up control of most humanoid robots.
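The nonholonomic "rolling wheel" law referred to above is commonly written as the unicycle kinematic model; a minimal sketch (my own illustration, not the speaker's code) makes the constraint concrete: the only control inputs are forward speed and turning rate, so no input can produce motion perpendicular to the heading, just as a wheel cannot slide sideways.

```python
import math

def step(x, y, theta, v, omega, dt=0.01):
    """One Euler step of unicycle kinematics:
    x' = v cos(theta), y' = v sin(theta), theta' = omega.
    The nonholonomic constraint is implicit: there is no input that
    moves the body sideways relative to its heading theta."""
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

# Move "forward" for 1 s at 1 m/s with no turning: the trajectory stays
# on the x-axis, exactly as the constraint dictates.
x, y, theta = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, theta = step(x, y, theta, v=1.0, omega=0.0)
print(round(x, 3), round(y, 3))  # prints: 1.0 0.0
```

The talk's claim is that goal-oriented human locomotion trajectories obey laws of this same nonholonomic family; the snippet only illustrates the model, not the motion-capture evidence.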

  • Jean-Pierre MERLET  Inria (France)


J-P. Merlet received his Master in Mathematics in 1978, his Engineer title from the École Centrale, Nantes, in 1980, his PhD from Paris VI University in 1986, and his Research Habilitation from Nice University in 1996. He has worked as an engineer in the food industry (Lu) and in civil engineering, as a research engineer at the CEA (the French nuclear agency), and as a research associate in Japan (Kyoto University, MEL, Tsukuba) and in Canada (McGill University, Montreal). He is now team leader of the HEPHAISTOS project at Inria (the French national research institute in control theory and computer science). HEPHAISTOS is a team with 12 members (PhD students, post-docs and full-time researchers) working in the field of assistance robotics for frail people. J-P. Merlet is the author of over 200 conference papers and over 60 journal papers in the fields of force control of robots, algebraic geometry, constraint solving and parallel robots. He is widely recognized as one of the world leaders in the domain of parallel robots. He is an IEEE Fellow and a recipient of the IFToMM Award of Merit. His current research interests are interval analysis, optimal design of mechanisms, and assistance robotics.

Title:  Learning human walking pattern with robots
Abstract: Human walking analysis is a major element for assessing a person's state of health, on both the functional and cognitive sides. Such an analysis is essential for frail people (the elderly, the handicapped, and people in a rehabilitation process) and should also be performed at home, to provide synthetic indicators for the medical community. Robots may be used for this assessment, but strict guidelines must be followed with respect to privacy and access to data. We will present examples of robotized devices that can perform such a task. One important aspect is that robotized devices are prone to uncertainties. As these devices must ensure the safety of the subject, and as the indicators they calculate may be used for medical decisions and for funding the subject's assistance devices, it is important to take these uncertainties into account in the design and exploitation of the devices. We will present how interval analysis may be used for that purpose.
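How interval analysis handles such uncertainties can be sketched with a toy example (mine, not the speaker's software), under the assumption that a walking-speed indicator is computed as distance over time from sensors with known error bounds: each arithmetic operation returns an interval guaranteed to contain the true value, so the indicator comes with certified bounds rather than a point estimate.

```python
class Interval:
    """Minimal interval-arithmetic type: [lo, hi] encloses the true value."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __truediv__(self, other):
        # Enclosure of the quotient; assumes the divisor interval
        # does not contain zero.
        candidates = [self.lo / other.lo, self.lo / other.hi,
                      self.hi / other.lo, self.hi / other.hi]
        return Interval(min(candidates), max(candidates))

    def __repr__(self):
        return f"[{self.lo:.3f}, {self.hi:.3f}]"

distance = Interval(4.9, 5.1)   # metres, distance sensor accurate to +/-0.1 m
duration = Interval(6.0, 6.5)   # seconds, timing accurate to +/-0.25 s
speed = distance / duration     # guaranteed enclosure of the true speed
print(speed)                    # prints: [0.754, 0.850]
```

A medical decision rule can then be checked against the whole interval (e.g. "speed is certainly below a frailty threshold"), which is the kind of guaranteed statement interval analysis provides and floating-point estimates do not.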

  • Fawzi NASHASHIBI Inria (France)


Dr. Fawzi Nashashibi, 49, is a senior researcher and the Program Manager of the RITS team at Inria Paris since 2010. He was a senior researcher and Program Manager in the robotics center of Mines ParisTech from 1994, and a project manager at ARMINES from May 2000. Fawzi Nashashibi obtained a Master's degree in Automation, Industrial Engineering and Signal Processing (LAAS/CNRS) in 1989, a PhD in Robotics from Toulouse University, prepared in the LAAS/CNRS laboratory, in 1993, and an HDR diploma (accreditation to supervise research) from University Pierre et Marie Curie (Paris 6) in 2005. He has played key roles in more than 50 European and national French projects, such as Carsense, HAVE-it, INTERSAFE, PICAV, FURBOT, CityMobil, ARCOS, ABV, LOVe and SPEEDCAM, some of which he coordinated. He is also involved in many collaborations with French and international academic and industrial partners. He is the author of more than 150 publications and patents in the field of ITS and ADAS systems. His current interest focuses on advanced urban mobility through the design and development of highly automated transportation systems. This includes highly automated unmanned guided vehicles (e.g. Cybercars) as well as automated personal vehicles. In this field he is known as an international expert. From 1994 to 2016 he was the thesis director and supervisor of 27 PhD theses, and he was a jury member for 55 other French and international PhD and HDR committees. He has also served on several laboratory evaluation committees and helped evaluate French and European research projects. An IEEE member, he also belongs to the ITS Society and the Robotics & Automation Society. He is an Associate Editor for several IEEE international journals and conferences, and a member of several international research committees in the field of automated and connected vehicles.

Title: "Human-vehicle interaction: integrating driving behaviours in vehicle automation"
Abstract: In recent decades, vehicle automation was seen as an extension of autonomous navigation for mobile robots. From local perception to final locomotion, the decision system has integrated environment models, obstacle detection, goal information, motion planning and optimal control. Task and trajectory planning systems are used to take into account geometric representations, vehicle dynamics and other optimality criteria, such as travel-time optimization or fuel-consumption reduction. The main difference between a mobile robot and an automated vehicle resides in the presence of a human in the loop. Thus, there are necessarily two main levels of interaction between the autonomous vehicle and the human: the first is the sharing of the driving task between the driver and the autopilot; the second is the interaction between the autonomous vehicle and other road users, pedestrians and other drivers. Therefore, advanced planning systems must absolutely take these new considerations into account. They are expressed as two new research directions: shared driving (known as the arbitration problem) and cognitive driving, which deals with the recognition of road users and the interpretation and prediction of their behaviours. In this talk we will tackle these two issues and discuss new trends in human-vehicle interaction for autonomous driving.

  • Scott NIEKUM  University of Texas at Austin (USA)

Scott NIEKUM is an Assistant Professor and the director of the Personal Autonomous Robotics Lab (PeARL) in the Department of Computer Science at UT Austin.  He is also a core faculty member in the interdepartmental robotics group at UT.  Prior to joining UT Austin, Scott was a postdoctoral research fellow at the Carnegie Mellon Robotics Institute. He received his Ph.D. in September 2013 from the Department of Computer Science at the University of Massachusetts Amherst, working under the supervision of Andrew Barto.  His research interests include learning from demonstration, robotic manipulation, time-series analysis, and reinforcement learning.

Title:  Discovering Structure in Robotics Tasks via Demonstrations and Active Learning
Abstract :   Future co-robots in the home and workplace will require the ability to quickly characterize new tasks and environments without the intervention of expert engineers.  Human demonstrations and active learning can play complementary roles when learning complex, multi-step tasks in novel environments—demonstrations are a fast, natural way to broadly provide human insight into task structure and environmental dynamics, while active learning can fine-tune models by exploiting the robot’s knowledge of its own internal representations and uncertainties.

Using these complementary data sources, I will focus on three types of structure discovery that can help robots quickly produce robust control strategies for novel tasks: 1) learning high-level task descriptions from unstructured demonstrations, 2) inferring physics-based models of task goals and environmental dynamics from demonstrations, and 3) reducing uncertainty over models and state estimates via active learning and interactive perception.  These techniques draw from Bayesian nonparametrics, time series analysis, information theory, and control theory to characterize complex tasks like IKEA furniture assembly that challenge the state of the art in manipulation.

  • Jean PONCE  Ecole Normale Superieure/PSL Research University (France)

Title: New Techniques for Image Matching
Abstract: This presentation introduces new techniques for image matching using bottom-up region proposals, together with local and global geometric consistency constraints, and discusses their application to two problems: the fully unsupervised discovery of the "topological" structure of image and video datasets, using matching as a proxy for supervision in object discovery tasks; and the computation of dense scene flow, warping images of similar but different objects onto one another. Extensive comparative experiments with both standard and new benchmarks demonstrate the promise of the proposed approach.
Joint work with Minsu Cho, Bumsub Ham, Suha Kwak, Ivan Laptev and Cordelia Schmid.

Keywords: Human-Robot Interaction, Workshop