
LiDARFormer: A Unified Transformer-based Multi-task Network for LiDAR Perception

2023-03-21 20:52:02
Zixiang Zhou, Dongqiangzi Ye, Weijia Chen, Yufei Xie, Yu Wang, Panqu Wang, Hassan Foroosh

Abstract

There is a recent trend in the LiDAR perception field towards unifying multiple tasks in a single strong network with improved performance, as opposed to using separate networks for each task. In this paper, we introduce a new transformer-based LiDAR multi-task learning paradigm. The proposed LiDARFormer utilizes cross-space global contextual feature information and exploits cross-task synergy to boost the performance of LiDAR perception tasks across multiple large-scale datasets and benchmarks. Our novel transformer-based framework includes a cross-space transformer module that learns attentive features between the 2D dense Bird's Eye View (BEV) and 3D sparse voxel feature maps. Additionally, we propose a transformer decoder for the segmentation task that dynamically adjusts the learned features by leveraging categorical feature representations. Furthermore, we combine the segmentation and detection features in a shared transformer decoder with cross-task attention layers to enhance and integrate the object-level and class-level features. LiDARFormer is evaluated on the large-scale nuScenes and Waymo Open datasets for both 3D detection and semantic segmentation, and it outperforms all previously published methods on both tasks. Notably, LiDARFormer achieves state-of-the-art performance of 76.4% L2 mAPH and 74.3% NDS on the challenging Waymo and nuScenes detection benchmarks for a single-model, LiDAR-only method.
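The cross-space transformer described in the abstract exchanges information between a dense 2D BEV feature grid and sparse 3D voxel features via attention. The sketch below is only an illustrative, framework-free toy of that idea (plain scaled dot-product cross-attention with made-up shapes and function names, not the paper's actual implementation): flattened BEV cells act as queries, occupied sparse voxels as keys/values.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_space_attention(bev_feats, voxel_feats):
    """Toy cross-attention: dense BEV cells attend to sparse voxel features.

    bev_feats:   (N_bev, d)   flattened dense 2D BEV feature map
    voxel_feats: (N_vox, d)   features of occupied sparse 3D voxels
    Returns an updated (N_bev, d) BEV feature map.
    """
    d = bev_feats.shape[-1]
    scores = bev_feats @ voxel_feats.T / np.sqrt(d)  # (N_bev, N_vox)
    attn = softmax(scores, axis=-1)                  # rows sum to 1
    return attn @ voxel_feats                        # convex mix of voxel feats

rng = np.random.default_rng(0)
bev = rng.standard_normal((4, 8))  # 4 BEV cells, feature dim 8 (illustrative)
vox = rng.standard_normal((6, 8))  # 6 occupied voxels
out = cross_space_attention(bev, vox)
print(out.shape)  # → (4, 8)
```

In the actual model the query/key/value paths would be learned projections and the attention would run in both directions (BEV→voxel and voxel→BEV); this sketch only shows the data flow of one direction.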


URL

https://arxiv.org/abs/2303.12194

PDF

https://arxiv.org/pdf/2303.12194.pdf


Tags
3D Attention Autonomous Detection Multi_Task Object_Detection Point_Cloud Segmentation Semantic_Segmentation Transformer