
LEO-VL: Towards 3D Vision-Language Generalists via Data Scaling with Efficient Representation

2025-06-11 16:56:34
Jiangyong Huang, Xiaojian Ma, Xiongkun Linghu, Yue Fan, Junchao He, Wenxin Tan, Qing Li, Song-Chun Zhu, Yixin Chen, Baoxiong Jia, Siyuan Huang

Abstract

Developing 3D vision-language (3D-VL) generalists capable of understanding 3D scenes and following natural language instructions to perform a wide range of tasks has been a long-standing goal in the 3D-VL community. Despite recent progress, 3D-VL models still lag behind their 2D counterparts in capability and robustness, falling short of the generalist standard. A key obstacle to developing 3D-VL generalists lies in data scalability, hindered by the lack of an efficient scene representation. We propose LEO-VL, a 3D-VL model built upon the condensed feature grid (CFG), an efficient scene representation that bridges 2D perception and 3D spatial structure while significantly reducing token overhead. This efficiency unlocks large-scale training toward 3D-VL generalists, for which we curate over 700k high-quality 3D-VL data samples spanning four domains of real-world indoor scenes and five tasks, including captioning and dialogue. LEO-VL achieves state-of-the-art performance on a variety of 3D QA benchmarks, including SQA3D, MSQA, and Beacon3D. Ablation studies confirm the efficiency of our representation, the importance of task and scene diversity, and the validity of our data curation principle. Furthermore, we introduce SceneDPO, a novel post-training objective that enhances the robustness of 3D-VL models. We hope our findings contribute to the advancement of scalable and robust 3D-VL generalists.
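To make the token-efficiency argument concrete, below is a minimal, hypothetical sketch of the general idea of condensing lifted 2D features into a coarse 3D grid so a whole scene becomes a short token sequence. It is not the paper's CFG implementation; the grid resolution, the average pooling, and the occupied-cell filtering are illustrative assumptions.

# Hypothetical sketch (not the paper's implementation): pool per-point features
# lifted from 2D views into a coarse 3D grid, keeping one token per occupied cell.
import torch

def condense_to_grid(points: torch.Tensor,   # (N, 3) 3D coordinates of observed points
                     feats: torch.Tensor,    # (N, C) 2D-backbone features lifted to those points
                     grid_size: int = 8      # assumed coarse resolution per axis
                     ) -> torch.Tensor:
    """Average-pool point features into a grid_size^3 grid and return only
    occupied cells as scene tokens, shape (M, C) with M << N."""
    # Normalize coordinates into [0, grid_size) and compute a flat cell index.
    mins, maxs = points.min(0).values, points.max(0).values
    cells = ((points - mins) / (maxs - mins + 1e-6) * grid_size).long().clamp_(0, grid_size - 1)
    flat = cells[:, 0] * grid_size**2 + cells[:, 1] * grid_size + cells[:, 2]

    # Scatter-average features into their cells.
    C = feats.shape[1]
    grid = torch.zeros(grid_size**3, C)
    count = torch.zeros(grid_size**3, 1)
    grid.index_add_(0, flat, feats)
    count.index_add_(0, flat, torch.ones(len(flat), 1))

    occupied = count.squeeze(1) > 0
    return grid[occupied] / count[occupied]   # one token per occupied cell

# Example: 100k observed points with 256-dim features collapse to at most 8^3 = 512 tokens.
tokens = condense_to_grid(torch.rand(100_000, 3), torch.randn(100_000, 256))
print(tokens.shape)

Under these assumptions, the scene token count is bounded by the grid resolution rather than the number of points or views, which is the kind of reduction that makes large-scale multi-task training tractable.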

URL

https://arxiv.org/abs/2506.09935

PDF

https://arxiv.org/pdf/2506.09935.pdf
