Paper Reading AI Learner

Show and Grasp: Few-shot Semantic Segmentation for Robot Grasping through Zero-shot Foundation Models

2024-04-19 08:58:52
Leonardo Barcellona, Alberto Bacchin, Matteo Terreran, Emanuele Menegatti, Stefano Ghidoni

Abstract

The ability of a robot to pick an object, known as robot grasping, is crucial for several applications, such as assembly or sorting. In such tasks, selecting the right target to pick is as essential as inferring a correct configuration of the gripper. A common solution to this problem relies on semantic segmentation models, which often show poor generalization to unseen objects and require considerable training time and massive amounts of data. To reduce the need for large datasets, some grasping pipelines exploit few-shot semantic segmentation models, which are capable of recognizing new classes given a few examples. However, this often comes at the cost of limited performance, and fine-tuning is required for these models to be effective in robot grasping scenarios. In this work, we propose to overcome all these limitations by combining the impressive generalization capability reached by foundation models with a high-performing few-shot classifier, which works as a score function to select the segmentation that is closest to the support set. The proposed model is designed to be embedded in a grasp synthesis pipeline. Extensive experiments using one or five examples show that our novel approach overcomes existing performance limitations, improving the state of the art both in few-shot semantic segmentation on the Graspnet-1B (+10.5% mIoU) and Ocid-grasp (+1.6% AP) datasets, and in real-world few-shot grasp synthesis (+21.7% grasp accuracy). The project page is available at: this https URL
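The core selection step described in the abstract — scoring candidate segmentations by their similarity to the support set and keeping the closest one — can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the function names and the use of a mean-prototype with cosine similarity are assumptions; the paper's actual few-shot classifier may score candidates differently.

```python
import numpy as np

def select_best_mask(candidate_embeddings: np.ndarray,
                     support_embeddings: np.ndarray):
    """Pick the candidate segmentation closest to the support set.

    Hypothetical sketch: candidate_embeddings holds one feature vector per
    candidate mask (e.g., produced by a zero-shot foundation model), and
    support_embeddings holds one feature vector per support example.
    We score each candidate by cosine similarity to the mean support
    embedding (a simple prototype) and return the best index.
    """
    # Build a single prototype from the support set and normalize it.
    prototype = support_embeddings.mean(axis=0)
    prototype = prototype / np.linalg.norm(prototype)

    # Normalize candidates row-wise so the dot product is cosine similarity.
    norms = np.linalg.norm(candidate_embeddings, axis=1, keepdims=True)
    candidates = candidate_embeddings / norms

    # Score every candidate mask against the prototype; keep the argmax.
    scores = candidates @ prototype
    return int(np.argmax(scores)), scores
```

In a grasping pipeline, the selected mask would then be passed downstream to the grasp synthesis module to constrain where gripper configurations are sampled.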


URL

https://arxiv.org/abs/2404.12717

PDF

https://arxiv.org/pdf/2404.12717.pdf
