
Generalized Label-Efficient 3D Scene Parsing via Hierarchical Feature Aligned Pre-Training and Region-Aware Fine-tuning

2023-12-01 15:47:04
Kangcheng Liu, Yong-Jin Liu, Kai Tang, Ming Liu, Baoquan Chen

Abstract

Deep neural network models have achieved remarkable progress in 3D scene understanding when trained in the closed-set setting with full labels. However, a major bottleneck of current 3D recognition approaches is that they cannot recognize unseen novel classes beyond the training categories, which limits their use in diverse real-world applications. Meanwhile, current state-of-the-art 3D scene understanding approaches primarily require high-quality labels to train neural networks and therefore perform well only in a fully supervised manner. This work presents a simple and generalized framework for 3D scene understanding when labeled scenes are quite limited. To extract knowledge about novel categories from pre-trained vision-language models, we propose a hierarchical feature-aligned pre-training and knowledge distillation strategy that distills meaningful information from large-scale vision-language models, benefiting open-vocabulary scene understanding tasks. To leverage boundary information, we propose a novel boundary-aware, energy-based loss that benefits from region-level boundary predictions. To encourage latent instance discrimination while maintaining efficiency, we propose an unsupervised region-level semantic contrastive learning scheme for point clouds that uses the network's confident predictions to discriminate intermediate feature embeddings at multiple stages. Extensive experiments on both indoor and outdoor scenes demonstrate the effectiveness of our approach in both data-efficient learning and open-world few-shot learning. All codes, models, and data are made publicly available at: this https URL.
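
The abstract does not give implementation details, so purely as a rough illustration, here is a minimal PyTorch-style sketch of what hierarchical feature-aligned pre-training could look like: 3D features from several backbone stages are projected and pulled toward frozen vision-language (e.g., CLIP-style) embeddings with a cosine distance. All class, function, and variable names here are hypothetical and are not taken from the authors' released code.

```python
# Hypothetical sketch (not the authors' implementation): hierarchical
# feature-aligned pre-training, where 3D features from several network stages
# are distilled toward frozen vision-language embeddings paired with the points.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalAlignment(nn.Module):
    def __init__(self, stage_dims, vlm_dim=512):
        super().__init__()
        # one projection head per hierarchy level of the 3D backbone
        self.proj = nn.ModuleList(nn.Linear(d, vlm_dim) for d in stage_dims)

    def forward(self, stage_feats, vlm_targets):
        """stage_feats: list of (N_i, D_i) 3D features, one tensor per stage.
        vlm_targets: list of (N_i, vlm_dim) vision-language embeddings paired
        with those features (e.g., 2D features back-projected onto points)."""
        loss = 0.0
        for proj, f3d, f2d in zip(self.proj, stage_feats, vlm_targets):
            p = F.normalize(proj(f3d), dim=-1)
            t = F.normalize(f2d, dim=-1)
            loss = loss + (1.0 - (p * t).sum(dim=-1)).mean()  # cosine distance
        return loss / len(self.proj)
```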
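
Similarly, the unsupervised region-level semantic contrastive learning described in the abstract could be sketched as an InfoNCE-style loss in which only regions with sufficiently confident predictions participate, and regions sharing the same pseudo-label are treated as positives. Again, this is an assumption-laden illustration, not the paper's code; the thresholds and shapes are made up for clarity.

```python
# Hypothetical sketch (not the authors' code): region-level contrastive loss
# driven by confident network predictions over pooled region features.
import torch
import torch.nn.functional as F

def region_semantic_contrastive_loss(region_feats, region_logits,
                                     temperature=0.07, conf_thresh=0.9):
    """region_feats: (R, D) pooled features of R regions (e.g., super-points).
    region_logits: (R, C) class logits predicted for each region."""
    probs = region_logits.softmax(dim=-1)
    conf, pseudo = probs.max(dim=-1)                  # confidence + pseudo-label
    keep = conf > conf_thresh                         # keep only confident regions
    if keep.sum() < 2:
        return region_feats.new_zeros(())

    feats = F.normalize(region_feats[keep], dim=-1)   # (R', D)
    labels = pseudo[keep]                             # (R',)

    sim = feats @ feats.t() / temperature             # pairwise similarities
    same = labels[:, None] == labels[None, :]
    eye = torch.eye(len(feats), dtype=torch.bool, device=feats.device)
    pos_mask = same & ~eye                            # same pseudo-label, not self
    valid = pos_mask.any(dim=1)
    if not valid.any():
        return feats.new_zeros(())

    # Pull together confident regions with the same pseudo-label,
    # push apart regions with different pseudo-labels.
    logits = sim.masked_fill(eye, float('-inf'))
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_counts
    return loss[valid].mean()
```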


URL

https://arxiv.org/abs/2312.00663

PDF

https://arxiv.org/pdf/2312.00663.pdf

