Abstract
Collaborative robots have become a popular tool for increasing productivity in partly automated manufacturing plants. Intuitive robot teaching methods are required to quickly and flexibly adapt robot programs to new tasks. Gestures play an essential role in human communication. However, in human-robot interaction scenarios, gesture-based user interfaces are so far rarely used, and if they are, they typically employ a one-to-one mapping of gestures to robot control variables. In this paper, we propose a method that infers the user's intent based on gesture episodes, the context of the situation, and common sense. The approach is evaluated in a simulated table-top manipulation setting. We conduct deterministic experiments with simulated users and show that the system can also handle each user's personal preferences.
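The fusion of gesture evidence with context and common sense described above can be sketched as a product of distributions over a shared intent set. This is a minimal illustrative sketch, not the paper's implementation; all intent names and probability values below are hypothetical assumptions.

```python
# Sketch: infer user intent by fusing three evidence sources over the same
# set of candidate intents (normalized product of scores).
# Intent labels and numbers are illustrative, not from the paper.

def infer_intent(gesture_likelihood, context_prior, commonsense_prior):
    """Return the normalized product of the three distributions."""
    intents = gesture_likelihood.keys()
    scores = {
        i: gesture_likelihood[i] * context_prior[i] * commonsense_prior[i]
        for i in intents
    }
    total = sum(scores.values())
    return {i: s / total for i, s in scores.items()}

# Example: a pointing gesture toward a cup in a table-top scene.
gesture = {"pick_cup": 0.7, "pick_block": 0.2, "stop": 0.1}      # from gesture episode
context = {"pick_cup": 0.5, "pick_block": 0.4, "stop": 0.1}      # cup is reachable
commonsense = {"pick_cup": 0.6, "pick_block": 0.3, "stop": 0.1}  # cups are often picked

posterior = infer_intent(gesture, context, commonsense)
best = max(posterior, key=posterior.get)  # "pick_cup"
```

Personal preferences could enter the same scheme as a per-user prior multiplied into the product before normalization.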
URL
https://arxiv.org/abs/2301.09899