Robotics

HUCEBOT: the project team looking to make robots indispensable

Changed on 21/10/2025

Imagine a humanoid robot alongside a human, capable of responding to voice commands and providing the right assistance. This might sound like something out of science fiction, but HUCEBOT, a new project team at the University of Lorraine Inria Centre, wants to make it a reality through a combination of AI, robotics and the simulation of human movement.
© Inria / Photo F. Nussbaumer - Signatures

Robots interacting with humans

“Autonomous robots working alone in an industrial setting, that’s already been done”, says Serena Ivaldi, Inria research director. “What I’m interested in is robots that assist humans in their work, and that need to be able to interact with them.” To focus on this goal, Ivaldi decided to head up HUCEBOT (*) (HUman-CEntered roBOTics), a new project team launched on 1st August, following in the footsteps of Larsen. “All of our expertise is complementary, and any advances in one direction will help others within the team.”

Developing multifunctional exoskeletons

Ivaldi and her colleagues will focus on two specific types of machines: exoskeletons and humanoid robots. Their goal will be to create artificial intelligence (AI) algorithms that will enable these machines to assist humans effectively, either by replacing them in dangerous situations or in difficult-to-access or remote locations, or by taking some of the physical strain, particularly when it comes to repetitive or demanding tasks. But overcoming the numerous scientific challenges in pursuit of this goal is not going to be easy. 

Assessing the real benefit of passive exoskeletons (which provide mechanical assistance but are not motorised) will be key. “These machines are cheaper than active exoskeletons, but can only be used for specific postures. This raises questions as to their suitability and cost-effectiveness”, Serena Ivaldi explains. This was explored in the Exoturn project, in collaboration with Nancy University Hospital, in which a passive exoskeleton was used to help hospital staff turn patients in intensive care; upon completion of this project, the team studied whether the same device could also assist nurses with bed bathing. “We realised that this exoskeleton is only useful a fraction of the time, and that it might be better to use a different type or design one that is better adapted”, the researcher elaborates. The project then continued under a new name, ExoSim, with a new objective: to create a database of tasks performed by hospital staff in order to simulate the benefits of different exoskeletons for staff health, including the risk of musculoskeletal disorders.

Active exoskeletons (which have motorised components) could be more effective when it comes to taking the strain off humans, but better controllers are required. “Current exoskeletons are rarely of assistance at the optimal time as they are unable to understand what the human is doing and either start working too early or too late, running the risk of injury”, Serena Ivaldi explains. “Not only that, but generally they are designed to assist one specific limb or posture. Our goal is to develop modular exoskeletons with the potential to assist different parts of the body.” This is ambitious: such a robot will need to be guided by algorithms capable of anticipating human movement in order to deliver the right assistance at the right time - no mean feat given the variability and complexity of human movement. Studying and simulating human movement, particularly variability between individuals - which will be one of the main areas of focus for HUCEBOT - will prove extremely useful in this regard.

Agile robots capable of learning from humans

The team will also be working on humanoid robots, whether autonomous or remote-controlled, developing algorithms aimed at improving their agility and enabling them to carry out as many tasks as possible at the same speed as humans, and alongside humans. The researchers have been exploring a number of different scenarios, including service at a cafeteria, deliveries within buildings, and handling objects in logistics and home assistance. “We need to design control and learning algorithms different from those currently available, which are too slow, in addition to devising a new way for humans to interact with the platforms. This will require new control & AI techniques that allow robots to learn from data demonstrated by humans and to improve their performance by interacting with humans.”

Two French National Research Agency (ANR) projects, Merlin and Ostensive, will assist them in pursuit of this goal. The aim of the former is to design learning and control techniques for highly agile robots capable of walking quickly or performing small jumps, while the latter will seek to develop the capacity of robots to generate agile but legible movements. Humans have specific ways of performing motions that people will naturally identify; robots must be able to do the same in order for their actions to also be understood - and anticipated - by human observers or partners.

Incorporating voice interaction

The team also aims to improve “full-body” and multi-contact control for robots, i.e. their capacity to move around in any environment using their whole bodies. “What we want to do is to change how humans program the contacts and motions performed by humanoid robots by incorporating voice interaction.” The ultimate aim is for operators to be able to tell a robot to put its hand on a table or take a book from a shelf, and for the robot to understand the instruction and the context and determine the optimal points of contact to perform the task. The EU project euROBIN has made breakthroughs in this field and is planning to hold a demonstration of natural language human/robot interaction at the European Parliament in 2026.

Commands using natural language are another aspect that Serena Ivaldi is passionate about and which she will be looking to develop with HUCEBOT. “It’s like magic. I firmly believe that it will totally change how we interact with robots: gone will be the days of complicated interfaces or buttons requiring the presence of a trained operator. Anyone will be able to use them.”

Working safely with robots

The team will tackle two points that are crucial to the roll-out of robots: their intuitive understanding of context and human safety. Where should a robot go when the human is moving? What should it do? HUCEBOT will be seeking to answer these questions through a number of projects.

One such project, developed as part of the National research programme (PEPR) O2R, involves combining robotics and the social sciences & humanities in order to study issues such as trust in robots and the acceptance by humans of errors made by robots. A robot has been stationed in the research centre cafeteria for this purpose: with a psychologist from Paris 8 University, the researchers will collect data on how people approach the robot. “This will help us to determine which interactions to focus on. Long-term, we hope to get it serving coffee!”

Finally, through the Enact cluster, where Ivaldi holds a chair in robotics and AI, the researchers will look to develop secure control techniques based on natural language instructions. “If a robot does something it shouldn’t, it has to be able either to understand that the user is telling it so and stop what it is doing, or to correct itself. It also has to be able to interpret human instructions, even when they aren’t clear”, Serena Ivaldi sums up. HUCEBOT is seeking to lay the foundations for natural, fluid and secure interaction between humans and robots.

(*) The HUCEBOT project team is a joint team of the CNRS, Inria and the University of Lorraine, based at the University of Lorraine Inria Centre and at the Lorraine Laboratory for Research in Computer Science and its Applications, Loria (CNRS/University of Lorraine).

A partnership with London

At an international level, HUCEBOT is collaborating with the robotics team at University College London (UCL) as part of the research project LEG-AI (Learning and Generative AI methods for Control of Legged Robots). The two teams have much in common, from their skill sets to a shared interest in controlling agile robots using AI and reinforcement learning, and the goal is to drive research forward on multiple fronts. This project is central to the Inria-UCL collaboration and forms an integral part of Inria’s international policy of strengthening strategic international partnerships.

Key dates in Serena Ivaldi's career

  • 2011: Completes a PhD in Humanoid Technology at the Italian Institute of Technology in Genoa.
  • 2012-2014: Postdoctoral researcher at the Institute of Intelligent Systems and Robotics at Sorbonne University in Paris, then at the Intelligent Autonomous Systems Laboratory at the University of Darmstadt (Germany).
  • 2014: Recruited to join the Inria Nancy - Grand Est centre as part of the Maia project team (subsequently Larsen).
  • 2020: Awarded the Suzanne Zivi Prize for research excellence.
  • 2023: Visiting expert at the European Space Agency in the Netherlands, working on the remote control of rovers.
  • 2024: General chair of IEEE-RAS HUMANOIDS 2024, an international conference on humanoid robots.
  • 2025: Launches the HUCEBOT project team.

A multidisciplinary team

Aside from Serena Ivaldi, HUCEBOT’s members include: 

  • Jean-Baptiste Mouret, Inria senior research scientist, specialist in AI with a particular focus on machine learning applied to robots
  • Pauline Maurice, CNRS research scientist, expert in the modelling, analysis and simulation of human movement
  • Enrico Mingo Hoffman, Inria ISFP, a researcher in full-body control for multi-contact movement and locomotion
  • Guillaume Bellegarda, whose research into quadrupedal robots and locomotion combines control and learning techniques.