Abstract
The programming of robotic assembly tasks is a key component of manufacturing and automation. Force-sensitive assembly, however, often requires reactive strategies to handle slight changes in positioning and unforeseen part jamming. Learning such strategies from human performance is a promising approach, but it faces two common challenges: handling low part clearances, which is difficult to capture from demonstrations, and learning intuitive strategies offline without access to the real hardware. We address these two challenges by learning probabilistic force strategies from data that are easily acquired offline, in a robot-less simulation, from human demonstrations with a joystick. We combine a Long Short-Term Memory (LSTM) network and a Mixture Density Network (MDN) to model human-inspired behavior in such a way that the learned strategies transfer easily onto real hardware. The experiments show a UR10e robot completing a plastic assembly with clearances of less than 100 micrometers, using strategies that were demonstrated solely in simulation.
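The abstract describes combining an LSTM with a Mixture Density Network so that the policy outputs a probability distribution over the next command rather than a single value. As a rough illustration of that idea (the paper's actual layer sizes, action dimensions, and component counts are not given in the abstract, so everything below is a hypothetical sketch), a mixture density head on top of an LSTM hidden state can be written in plain NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class MDNHead:
    """Mixture density head mapping an LSTM hidden state to a Gaussian
    mixture over the next force command. All sizes are illustrative
    assumptions, not the paper's architecture."""

    def __init__(self, hidden_dim, out_dim, n_components):
        self.out_dim = out_dim
        self.k = n_components
        # one linear map producing mixture weights, means, and log-stddevs
        n_out = n_components * (1 + 2 * out_dim)
        self.W = rng.normal(0.0, 0.1, (hidden_dim, n_out))
        self.b = np.zeros(n_out)

    def forward(self, h):
        z = h @ self.W + self.b
        k, d = self.k, self.out_dim
        pi = softmax(z[:k])                          # mixture weights, sum to 1
        mu = z[k:k + k * d].reshape(k, d)            # component means
        sigma = np.exp(z[k + k * d:]).reshape(k, d)  # stddevs, strictly positive
        return pi, mu, sigma

    def sample(self, h):
        # draw a component, then sample from its Gaussian
        pi, mu, sigma = self.forward(h)
        c = rng.choice(self.k, p=pi)
        return mu[c] + sigma[c] * rng.normal(size=self.out_dim)

head = MDNHead(hidden_dim=32, out_dim=3, n_components=5)
h = rng.normal(size=32)            # stand-in for an LSTM hidden state
pi, mu, sigma = head.forward(h)
cmd = head.sample(h)               # e.g. a 3-D force command to execute
```

Sampling from the mixture at each step is what makes the learned strategy probabilistic: when a part jams, repeated samples explore different corrective motions instead of deterministically repeating the same one.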
URL
https://arxiv.org/abs/2303.12440