Abstract
Intention-based Human-Robot Interaction (HRI) systems allow robots to perceive and interpret user actions, to interact proactively with humans, and to adapt to their behavior. Intention prediction is therefore pivotal to natural, collaborative interaction between humans and robots. In this paper, we examine the use of Large Language Models (LLMs) for inferring human intention during a collaborative object categorization task with a physical robot. We introduce a hierarchical approach that interprets non-verbal user cues, such as hand gestures, body poses, and facial expressions, and combines them with environment states and verbal cues captured by an existing Automatic Speech Recognition (ASR) system. Our evaluation demonstrates the potential of LLMs to interpret non-verbal cues and to combine them with their context-understanding capabilities and real-world knowledge in support of intention prediction during human-robot interaction.
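To make the hierarchical combination of cues concrete, the following is a minimal Python sketch of how perceived non-verbal cues, environment state, and an ASR transcript might be fused into a single prompt for an LLM-based intention predictor. The container `ObservedCues`, the function `build_intention_prompt`, and all example values are hypothetical illustrations, not the paper's actual implementation or prompt format.

```python
from dataclasses import dataclass, field

@dataclass
class ObservedCues:
    """Hypothetical container for one interaction snapshot."""
    hand_gesture: str       # e.g. "pointing at the red cup"
    body_pose: str          # e.g. "leaning toward the table"
    facial_expression: str  # e.g. "neutral"
    environment: list[str] = field(default_factory=list)  # visible objects
    utterance: str = ""     # ASR transcript; may be empty

def build_intention_prompt(cues: ObservedCues) -> str:
    """Compose a prompt hierarchically: non-verbal cues are summarized
    first, then combined with the environment state and the verbal
    channel before asking the LLM for the most likely user intention."""
    nonverbal = (
        f"Gesture: {cues.hand_gesture}. "
        f"Pose: {cues.body_pose}. "
        f"Expression: {cues.facial_expression}."
    )
    context = f"Objects on the table: {', '.join(cues.environment)}."
    verbal = (
        f'User said: "{cues.utterance}"' if cues.utterance
        else "User said nothing."
    )
    return (
        "You are assisting a robot in a collaborative object "
        "categorization task.\n"
        f"Non-verbal cues: {nonverbal}\n"
        f"Environment: {context}\n"
        f"{verbal}\n"
        "What is the user's most likely intention? Answer briefly."
    )

if __name__ == "__main__":
    snapshot = ObservedCues(
        hand_gesture="pointing at the red cup",
        body_pose="leaning toward the table",
        facial_expression="neutral",
        environment=["red cup", "blue sponge", "banana"],
        utterance="Put that one with the kitchen items.",
    )
    # The resulting prompt would be sent to an LLM of choice; the
    # model's reply serves as the predicted intention.
    print(build_intention_prompt(snapshot))
```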
URL
https://arxiv.org/abs/2404.08424