Abstract
There is a recent trend in the LiDAR perception field towards unifying multiple tasks in a single strong network with improved performance, as opposed to using separate networks for each task. In this paper, we introduce a new transformer-based LiDAR multi-task learning paradigm. The proposed LiDARFormer utilizes cross-space global contextual feature information and exploits cross-task synergy to boost the performance of LiDAR perception tasks across multiple large-scale datasets and benchmarks. Our novel transformer-based framework includes a cross-space transformer module that learns attentive features between the 2D dense Bird's Eye View (BEV) and 3D sparse voxel feature maps. Additionally, we propose a transformer decoder for the segmentation task that dynamically adjusts the learned features by leveraging categorical feature representations. Furthermore, we combine the segmentation and detection features in a shared transformer decoder with cross-task attention layers to enhance and integrate the object-level and class-level features. LiDARFormer is evaluated on the large-scale nuScenes and Waymo Open datasets for both 3D detection and semantic segmentation, and it outperforms all previously published methods on both tasks. Notably, LiDARFormer achieves state-of-the-art performance of 76.4% L2 mAPH and 74.3% NDS on the challenging Waymo and nuScenes detection benchmarks as a single-model, LiDAR-only method.
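To make the cross-space idea concrete, below is a minimal PyTorch sketch of attention between sparse 3D voxel tokens and a dense 2D BEV feature map. The class name, tensor shapes, and single-layer design are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CrossSpaceAttention(nn.Module):
    """Minimal sketch: sparse 3D voxel tokens attend to a dense 2D BEV map.

    An illustrative approximation of a cross-space transformer module;
    names and shapes are assumptions, not LiDARFormer's actual code.
    """

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, voxel_feats: torch.Tensor, bev_feats: torch.Tensor) -> torch.Tensor:
        # voxel_feats: (B, N, C) tokens gathered from occupied 3D voxels
        # bev_feats:   (B, C, H, W) dense BEV feature map
        bev_tokens = bev_feats.flatten(2).transpose(1, 2)  # (B, H*W, C)
        # Voxel tokens query the BEV map for global 2D context
        attended, _ = self.attn(voxel_feats, bev_tokens, bev_tokens)
        return self.norm(voxel_feats + attended)           # residual + norm

# Toy usage with assumed sizes
B, N, C, H, W = 2, 128, 64, 16, 16
module = CrossSpaceAttention(dim=C)
out = module(torch.randn(B, N, C), torch.randn(B, C, H, W))
print(out.shape)  # torch.Size([2, 128, 64])
```

The abstract describes attention *between* the two spaces, so a full module would presumably also pass information from voxel features back to the BEV map; only one direction is shown here for brevity.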
URL
https://arxiv.org/abs/2303.12194