Abstract
In this work, we present a multimodal system for active robot-object interaction that combines laser-based SLAM, RGBD images, and contact sensors. In the object manipulation task, the robot uses RGBD data to adjust its initial pose with respect to obstacles and target objects, enabling it to grasp objects in different configuration spaces while avoiding collisions, and it uses the contact sensors in its hand to update information during the final steps of the manipulation process. We perform a series of experiments to evaluate the performance of the proposed system under the RoboCup2018 international competition regulations. We compare our approach against several baselines, namely a no-feedback method and visual-only and tactile-only feedback methods; our proposed visual-and-tactile feedback method performs best.
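The abstract describes a two-stage feedback loop: a visual stage that adjusts the robot's base pose from RGBD-derived estimates, followed by a tactile stage that confirms the grasp from contact-sensor readings. A minimal sketch of that control flow is below; all names, thresholds, and data structures are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the visual-and-tactile feedback loop described
# in the abstract. Class and function names are illustrative only.

from dataclasses import dataclass


@dataclass
class Pose:
    """Planar base pose: position plus heading."""
    x: float
    y: float
    theta: float


def adjust_base_pose(pose: Pose, target_offset: tuple) -> Pose:
    """Visual stage: shift the base toward the target object using an
    offset assumed to come from RGBD perception (passed in directly here)."""
    dx, dy = target_offset
    return Pose(pose.x + dx, pose.y + dy, pose.theta)


def refine_grasp(contact_readings: list) -> str:
    """Tactile stage: check how many contact sensors in the hand fire
    and report the grasp state (threshold of 2 is an assumption)."""
    fingers_in_contact = sum(bool(r) for r in contact_readings)
    return "grasped" if fingers_in_contact >= 2 else "retry"


# Usage: one pass through the two feedback stages.
start = Pose(0.0, 0.0, 0.0)
approach = adjust_base_pose(start, (0.3, -0.1))
state = refine_grasp([True, True, False])
print(approach, state)
```

The split into a pose-adjustment step and a contact-verification step mirrors the baselines compared in the paper: dropping either function corresponds to the visual-only or tactile-only condition.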
URL
https://arxiv.org/abs/1809.03216