Abstract
Multi-task networks can potentially improve performance and computational efficiency over single-task networks, facilitating online deployment. However, current multi-task architectures in point cloud perception combine multiple task-specific point cloud representations, each requiring a separate feature encoder, which makes the networks bulky and slow. We propose PAttFormer, an efficient multi-task architecture for joint semantic segmentation and object detection in point clouds that relies solely on a point-based representation. The network builds on transformer-based feature encoders using neighborhood attention and grid pooling, and on a query-based detection decoder with a novel 3D deformable-attention head. Unlike other LiDAR-based multi-task architectures, PAttFormer does not require separate feature encoders for multiple task-specific point cloud representations, yielding a network that is 3x smaller and 1.4x faster while achieving competitive performance on the nuScenes and KITTI benchmarks for autonomous driving perception. Our extensive evaluations show substantial gains from multi-task learning, improving LiDAR semantic segmentation by +1.7% mIoU and 3D object detection by +1.7% mAP on the nuScenes benchmark compared to the single-task models.
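To make the neighborhood-attention idea mentioned above concrete, here is a minimal, hedged sketch of single-head attention restricted to each point's k nearest neighbors. This is an illustrative toy in NumPy, not the paper's actual implementation; all function and variable names are hypothetical.

```python
import numpy as np

def neighborhood_attention(points, feats, k=4):
    """Toy single-head attention where each point attends only to its
    k nearest neighbors (including itself). Illustrative sketch only;
    not the PAttFormer implementation."""
    n, d = feats.shape
    # Pairwise squared distances -> indices of the k nearest neighbors.
    diff = points[:, None, :] - points[None, :, :]
    dist = (diff ** 2).sum(-1)
    knn = np.argsort(dist, axis=1)[:, :k]          # (n, k) neighbor indices

    out = np.empty_like(feats)
    for i in range(n):
        q = feats[i]                               # query: the point's own feature
        kv = feats[knn[i]]                         # keys/values: neighbor features
        logits = kv @ q / np.sqrt(d)               # scaled dot-product scores
        w = np.exp(logits - logits.max())
        w /= w.sum()                               # softmax over the k neighbors
        out[i] = w @ kv                            # weighted sum of neighbor features
    return out

rng = np.random.default_rng(0)
pts = rng.normal(size=(6, 3))                      # 6 points in 3D
fts = rng.normal(size=(6, 8))                      # 8-dim feature per point
updated = neighborhood_attention(pts, fts, k=3)
print(updated.shape)  # (6, 8)
```

Restricting attention to local neighborhoods keeps the cost linear in the number of points (times k) instead of quadratic, which is what makes a purely point-based encoder practical for full LiDAR scans.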
URL
https://arxiv.org/abs/2404.12798