Paper Reading AI Learner

DOrA: 3D Visual Grounding with Order-Aware Referring

2024-03-25 08:31:14
Tung-Yu Wu, Sheng-Yu Huang, Yu-Chiang Frank Wang

Abstract

3D visual grounding aims to identify the target object in a 3D point cloud scene referred to by a natural language description. While previous works attempt to exploit the verbo-visual relation with proposed cross-modal transformers, unstructured natural utterances and scattered objects might lead to undesirable performance. In this paper, we introduce DOrA, a novel 3D visual grounding framework with Order-Aware referring. DOrA is designed to leverage Large Language Models (LLMs) to parse the language description and suggest a referential order of anchor objects. Such ordered anchor objects allow DOrA to update visual features and locate the target object during the grounding process. Experimental results on the NR3D and ScanRefer datasets demonstrate our superiority in both low-resource and full-data scenarios. In particular, DOrA surpasses current state-of-the-art frameworks by 9.3% and 7.8% grounding accuracy under the 1%-data and 10%-data settings, respectively.
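The core idea of the abstract — parsing a referring description into an ordered chain of anchor objects before grounding the target — can be illustrated with a minimal sketch. Note that DOrA uses an LLM for this parsing step; the rule-based `parse_referential_order` helper and its relation list below are hypothetical stand-ins for illustration only, not the paper's actual implementation.

```python
import re

# Relational connectives that typically introduce an anchor object
# in a referring description (illustrative list, not from the paper).
RELATIONS = ["next to", "near", "behind", "in front of",
             "on top of", "under", "beside"]

def parse_referential_order(description: str):
    """Split a referring description into (target, ordered_anchors).

    Heuristic stand-in for DOrA's LLM-based parser: the first noun
    phrase is the target; each relational connective introduces an
    anchor. Anchors are returned in a referential order where the
    last-mentioned (most independent) anchor comes first.
    """
    pattern = "|".join(re.escape(r) for r in RELATIONS)
    parts = re.split(pattern, description.lower())
    phrases = [p.strip(" .,") for p in parts if p.strip(" .,")]

    def head_noun(phrase: str) -> str:
        # Drop articles and filler words, keep the final noun.
        phrase = re.sub(r"\b(the|a|an|that|is|which)\b", " ", phrase)
        tokens = phrase.split()
        return tokens[-1] if tokens else phrase

    target, *anchors = [head_noun(p) for p in phrases]
    return target, list(reversed(anchors))  # ground outermost anchor first

target, order = parse_referential_order(
    "the chair next to the table that is near the door")
print(target, order)  # chair ['door', 'table']
```

In DOrA's setting, such an ordered anchor list would then drive the grounding process: visual features are progressively updated as each anchor is located, narrowing down to the target object.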

URL

https://arxiv.org/abs/2403.16539

PDF

https://arxiv.org/pdf/2403.16539.pdf

