Abstract
The ability of a robot to pick up an object, known as robot grasping, is crucial for several applications, such as assembly or sorting. In such tasks, selecting the right target to pick is as essential as inferring a correct gripper configuration. A common solution to this problem relies on semantic segmentation models, which often generalize poorly to unseen objects and require considerable time and massive data to train. To reduce the need for large datasets, some grasping pipelines exploit few-shot semantic segmentation models, which can recognize new classes given a few examples. However, this often comes at the cost of limited performance, and fine-tuning is required for them to be effective in robot grasping scenarios. In this work, we propose to overcome all these limitations by combining the impressive generalization capability of foundation models with a high-performing few-shot classifier that acts as a score function to select the segmentation closest to the support set. The proposed model is designed to be embedded in a grasp synthesis pipeline. Extensive experiments using one or five examples show that our novel approach overcomes existing performance limitations, improving the state of the art both in few-shot semantic segmentation on the Graspnet-1B (+10.5% mIoU) and Ocid-grasp (+1.6% AP) datasets, and in real-world few-shot grasp synthesis (+21.7% grasp accuracy). The project page is available at: this https URL
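The core selection step described above (a few-shot classifier scoring candidate segmentations against a support set) can be sketched in a minimal form. This is an illustrative assumption, not the paper's actual implementation: it assumes candidate masks from a foundation segmenter have already been pooled into feature vectors, and scores each candidate by its mean cosine similarity to the support-example features.

```python
import numpy as np

def select_mask(candidate_feats: np.ndarray, support_feats: np.ndarray):
    """Pick the candidate segmentation closest to the support set.

    candidate_feats: (N, D) pooled features, one row per candidate mask.
    support_feats:   (K, D) features of the K support examples (1 or 5 shots).
    Returns the index of the best candidate and all scores.
    """
    # L2-normalize so the dot product below is cosine similarity.
    c = candidate_feats / np.linalg.norm(candidate_feats, axis=1, keepdims=True)
    s = support_feats / np.linalg.norm(support_feats, axis=1, keepdims=True)
    # Score each candidate by its mean similarity to the support examples.
    scores = (c @ s.T).mean(axis=1)
    return int(np.argmax(scores)), scores
```

In a grasp pipeline, the selected mask would then be passed downstream to the grasp synthesis module; the actual model uses a learned few-shot classifier rather than this raw cosine score.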
URL
https://arxiv.org/abs/2404.12717