Paper Reading AI Learner

A Point-Based Approach to Efficient LiDAR Multi-Task Perception

2024-04-19 11:24:34
Christopher Lang, Alexander Braun, Lars Schillingmann, Abhinav Valada

Abstract

Multi-task networks can potentially improve performance and computational efficiency compared to single-task networks, facilitating online deployment. However, current multi-task architectures in point cloud perception combine multiple task-specific point cloud representations, each requiring a separate feature encoder and making the network structures bulky and slow. We propose PAttFormer, an efficient multi-task architecture for joint semantic segmentation and object detection in point clouds that only relies on a point-based representation. The network builds on transformer-based feature encoders using neighborhood attention and grid-pooling and a query-based detection decoder using a novel 3D deformable-attention detection head design. Unlike other LiDAR-based multi-task architectures, our proposed PAttFormer does not require separate feature encoders for multiple task-specific point cloud representations, resulting in a network that is 3x smaller and 1.4x faster while achieving competitive performance on the nuScenes and KITTI benchmarks for autonomous driving perception. Our extensive evaluations show substantial gains from multi-task learning, improving LiDAR semantic segmentation by +1.7% in mIoU and 3D object detection by +1.7% in mAP on the nuScenes benchmark compared to the single-task models.
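
The two encoder primitives the abstract names, neighborhood attention and grid pooling, can be illustrated compactly. The following is a minimal PyTorch sketch, not the authors' PAttFormer code: the module names, the k-NN neighborhood construction, and the voxel size are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's implementation) of two
# point-based encoder primitives: k-NN neighborhood attention and voxel
# grid pooling. All names and hyperparameters here are assumptions.
import torch
import torch.nn as nn


class NeighborhoodAttention(nn.Module):
    """Each point attends only to its k nearest neighbors."""

    def __init__(self, dim: int, k: int = 16):
        super().__init__()
        self.k = k
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        self.scale = dim ** -0.5

    def forward(self, xyz: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) point coordinates, feats: (N, C) point features
        dists = torch.cdist(xyz, xyz)                    # (N, N) pairwise distances
        idx = dists.topk(self.k, largest=False).indices  # (N, k) neighbor indices
        q = self.q(feats)                                # (N, C) queries
        k_feats, v_feats = self.kv(feats[idx]).chunk(2, dim=-1)  # (N, k, C) each
        attn = (q.unsqueeze(1) * k_feats).sum(-1) * self.scale   # (N, k) scores
        attn = attn.softmax(dim=-1)
        return (attn.unsqueeze(-1) * v_feats).sum(dim=1)  # (N, C) aggregated


def grid_pool(xyz: torch.Tensor, feats: torch.Tensor, voxel: float):
    """Average all points falling into the same voxel, shrinking the point set."""
    coords = torch.floor(xyz / voxel).long()             # integer voxel coordinates
    uniq, inverse = torch.unique(coords, dim=0, return_inverse=True)
    n, c = uniq.shape[0], feats.shape[1]
    pooled_feats = torch.zeros(n, c).index_add_(0, inverse, feats)
    pooled_xyz = torch.zeros(n, 3).index_add_(0, inverse, xyz)
    counts = torch.zeros(n, 1).index_add_(0, inverse, torch.ones(xyz.shape[0], 1))
    return pooled_xyz / counts, pooled_feats / counts


# Example: 2048 raw points with 64-dim features, one attention + pooling stage.
xyz = torch.rand(2048, 3) * 50.0
feats = torch.rand(2048, 64)
feats = NeighborhoodAttention(dim=64)(xyz, feats)
xyz, feats = grid_pool(xyz, feats, voxel=1.0)
```

Stacking such attention blocks with pooling stages progressively coarsens the point set, which is how a purely point-based encoder can feed both segmentation and detection heads without maintaining separate voxel or range-view branches.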

URL

https://arxiv.org/abs/2404.12798

PDF

https://arxiv.org/pdf/2404.12798.pdf

