As humanoid robots transition from labs to real-world environments, it is essential to democratize robot control for non-expert users. Recent human-robot imitation algorithms focus on following a reference human motion with high precision, but they are sensitive to the quality of that reference and require the human operator to simplify their movements to match the robot's capabilities. We argue instead that the robot should understand and adapt the reference motion to its own abilities, easing the operator's task. To this end, we introduce a deep-learning model that anticipates the robot's performance when imitating a given reference. Our system can then generate multiple references from a high-level task command, assign a score to each, and select the one that best achieves the desired robot behavior. Our Self-AWare model (SAW) ranks potential robot behaviors according to several criteria, such as fall likelihood, adherence to the reference motion, and smoothness. We integrate advanced motion generation, robot control, and SAW into a single system, ensuring optimal robot behavior for any task command. For instance, SAW anticipates falls with 99.29% accuracy.
Optimizing robot behavior control from high-level task commands using self-awareness. Given an operator's instruction, the robot generates multiple potential behaviors to accomplish the task, evaluates them against its own capabilities and limits, and executes the most suitable one. This work introduces motion adaptation with a Self-AWare model (SAW) that anticipates how well the robot can follow a given reference by ranking the candidate behaviors and choosing the optimal one. For instance, in a scenario with three potential actions (walking, running, and jumping), the robot assesses each option and determines that walking is the most appropriate; consequently, it walks to the person and says, "Hi." In this image, the generated references are shown as orange robots, while the robot's behavior when attempting to follow them is depicted in blue. Note that the reference motions are fixed only for visualization and do not account for gravity.
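The generate-score-select loop described above can be sketched in a few lines. This is a hypothetical illustration only: the candidate names, the scoring weights, and the `saw_score` function are invented here as placeholders, whereas the paper's actual SAW model is a learned deep network that predicts these criteria from the reference motion.

```python
# Minimal sketch of the selection step: generate several candidate reference
# motions for a task, score each with a (placeholder) self-aware criterion,
# and pick the highest-ranked one to execute. All names and weights are
# illustrative assumptions, not the paper's implementation.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str                # e.g. "walk", "run", "jump"
    fall_likelihood: float   # predicted probability of falling (0..1)
    adherence: float         # predicted tracking fidelity to the reference (0..1)
    smoothness: float        # predicted motion smoothness (0..1)

def saw_score(c: Candidate) -> float:
    """Toy scoring rule combining the three criteria named in the abstract."""
    return 0.5 * (1.0 - c.fall_likelihood) + 0.3 * c.adherence + 0.2 * c.smoothness

def select_behavior(candidates: list[Candidate]) -> Candidate:
    # Rank all candidates and return the best-scoring behavior.
    return max(candidates, key=saw_score)

candidates = [
    Candidate("walk", fall_likelihood=0.05, adherence=0.9, smoothness=0.8),
    Candidate("run",  fall_likelihood=0.40, adherence=0.7, smoothness=0.6),
    Candidate("jump", fall_likelihood=0.60, adherence=0.5, smoothness=0.4),
]
best = select_behavior(candidates)
print(best.name)  # "walk" scores highest under this toy weighting
```

Under these made-up scores, walking wins because its low fall likelihood dominates the weighted sum, mirroring the walking/running/jumping example in the caption.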
@INPROCEEDINGS{esteve2024selfaware,
author={Valls Mascaro, Esteve and Lee, Dongheui},
booktitle={2024 IEEE-RAS 23rd International Conference on Humanoid Robots (Humanoids)},
title={Know your limits! Optimize the behavior of bipedal robots through self-awareness},
year={2024},
pages={1-8},
keywords={Imitation Learning; Legged Robots; Intention Recognition}
}