Abstract
In this work, we aim to enable legged robots to learn how to interpret human social cues and produce appropriate behaviors through physical human guidance. However, learning through physical engagement can place a heavy burden on users when the process requires large amounts of human-provided data. To address this, we propose a human-in-the-loop framework that enables robots to acquire navigational behaviors in a data-efficient manner and to be controlled via multimodal natural human inputs, specifically gestural and verbal commands. We reconstruct interaction scenes using a physics-based simulation and aggregate data to mitigate distributional shifts arising from limited demonstration data. Our progressive goal-cueing strategy adaptively feeds appropriate commands and navigation goals during training, leading to more accurate navigation and stronger alignment between human input and robot behavior. We evaluate our framework across six real-world agile navigation scenarios, including jumping over or avoiding obstacles. Experimental results show that the proposed method succeeds in almost all trials across these scenarios, achieving a 97.15% task success rate with less than 1 hour of demonstration data in total.
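The abstract does not give implementation details, but the core loop it describes, rolling out the learner's policy in simulation, collecting human corrective labels on the states the learner visits, and scheduling goals progressively, can be illustrated with a minimal sketch. Everything below (Policy, expert_label, progressive_goal_cue, the scalar state) is a hypothetical stand-in, not the paper's actual API or method.

```python
"""Minimal sketch of a DAgger-style human-in-the-loop training loop
with a progressive goal-cueing schedule. All names and the toy 1-D
state are illustrative assumptions, not the paper's implementation."""
import random


def expert_label(state, goal):
    # Stand-in for a human demonstrator's corrective action
    # (e.g., physical guidance) queried at a learner-visited state.
    return goal - state


class Policy:
    def __init__(self):
        self.dataset = []  # aggregated (state, goal, action) tuples

    def act(self, state, goal):
        # Placeholder learner: noisy step toward the cued goal.
        return state + 0.1 * (goal - state) + random.uniform(-0.05, 0.05)

    def fit(self):
        # Placeholder for supervised training on the aggregated data.
        pass


def progressive_goal_cue(iteration, num_iterations):
    """Hypothetical schedule: early iterations draw easy, nearby goals;
    later iterations draw from a progressively wider goal range."""
    difficulty = (iteration + 1) / num_iterations
    return random.uniform(0.0, difficulty)


def train(num_iterations=5, rollout_len=20):
    policy = Policy()
    for it in range(num_iterations):
        goal = progressive_goal_cue(it, num_iterations)
        state = 0.0
        for _ in range(rollout_len):
            # Roll out the learner (in simulation) so the dataset
            # covers states the learner actually visits, mitigating
            # distribution shift from limited demonstrations.
            state = policy.act(state, goal)
            # Aggregate the human's corrective label for that state.
            policy.dataset.append((state, goal, expert_label(state, goal)))
        policy.fit()
    return policy


if __name__ == "__main__":
    train()
```

The design point is that labels are gathered on the learner's own state distribution rather than only on demonstrator trajectories, which is the standard data-aggregation remedy for compounding errors under limited demonstration data.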
Abstract (translated)
In this work, we aim to enable legged robots to learn to interpret human social cues and produce appropriate behaviors through physical human guidance. However, learning through physical interaction can place a heavy burden on users, especially when the process requires large amounts of human-provided data. To address this, we propose a human-in-the-loop framework that enables robots to acquire navigation behaviors in a data-efficient manner and to accept multimodal natural human inputs (specifically, gestural and verbal commands). We reconstruct interaction scenes using physics-based simulation and aggregate data to mitigate the distribution shift caused by limited demonstration data. Our progressive goal-cueing strategy adaptively supplies appropriate commands and navigation goals during training, leading to more accurate navigation and stronger alignment between human input and robot behavior. We evaluate the framework in six real-world agile navigation scenarios, including jumping over or avoiding obstacles. Experimental results show that our method succeeds in almost all trials across these scenarios, achieving a 97.15% task success rate with less than 1 hour of demonstration data in total.
URL
https://arxiv.org/abs/2601.08422