Paper Reading AI Learner

Rigidity-Aware Detection for 6D Object Pose Estimation

2023-03-22 09:02:54
Yang Hai, Rui Song, Jiaojiao Li, Mathieu Salzmann, Yinlin Hu

Abstract

Most recent 6D object pose estimation methods first use object detection to obtain 2D bounding boxes before actually regressing the pose. However, the general object detection methods they use are ill-suited to handle cluttered scenes, thus producing a poor initialization for the subsequent pose network. To address this, we propose a rigidity-aware detection method exploiting the fact that, in 6D pose estimation, the target objects are rigid. This lets us introduce an approach to sampling positive object regions from the entire visible object area during training, instead of naively drawing samples from the bounding box center, where the object might be occluded. As such, every visible object part can contribute to the final bounding box prediction, yielding better detection robustness. Key to the success of our approach is a visibility map, which we propose to build using a minimum barrier distance between every pixel in the bounding box and the box boundary. Our results on seven challenging 6D pose estimation datasets evidence that our method outperforms general detection frameworks by a large margin. Furthermore, combined with a pose regression network, we obtain state-of-the-art pose estimation results on the challenging BOP benchmark.
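The abstract's visibility map rests on the minimum barrier distance (MBD): the cost of a path is the spread (max minus min) of the intensities along it, and a pixel's distance is the smallest such spread over all paths to the box boundary. The sketch below is not the paper's implementation; it is a generic Dijkstra-style MBD transform on a grayscale crop, assuming 4-connectivity and the box boundary as the seed set. Note that this greedy relaxation is a close approximation rather than an exact MBD in all cases; how the paper turns the distance into a visibility score is also not shown here.

```python
import heapq
import numpy as np

def min_barrier_distance(img, seeds):
    """Approximate minimum barrier distance (MBD) transform.

    img   : 2D float array, e.g. grayscale intensities inside a detection box
    seeds : boolean mask of seed pixels, e.g. the box boundary
    Returns, for each pixel, the smallest (max - min) of intensities
    along any 4-connected path to a seed (Dijkstra-style relaxation).
    """
    h, w = img.shape
    dist = np.full((h, w), np.inf)
    # Running max/min of intensities along the best path found so far.
    hi = img.copy()
    lo = img.copy()
    heap = []
    for y, x in zip(*np.nonzero(seeds)):
        dist[y, x] = 0.0
        heapq.heappush(heap, (0.0, y, x))
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y, x]:
            continue  # stale heap entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                nhi = max(hi[y, x], img[ny, nx])
                nlo = min(lo[y, x], img[ny, nx])
                nd = nhi - nlo
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    hi[ny, nx], lo[ny, nx] = nhi, nlo
                    heapq.heappush(heap, (nd, ny, nx))
    return dist
```

Intuitively, pixels reachable from the box boundary through near-constant intensity get a small MBD (likely background or occluder connected to the outside), while pixels separated from the boundary by a strong intensity barrier get a large one, which is what makes the transform usable as a visibility cue.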

URL

https://arxiv.org/abs/2303.12396

PDF

https://arxiv.org/pdf/2303.12396.pdf
