Abstract
Architectures that first convert point clouds to a grid representation and then apply convolutional neural networks achieve good performance for radar-based object detection. However, the transfer from irregular point cloud data to a dense grid structure is often associated with a loss of information, due to the discretization and aggregation of points. In this paper, we propose a novel architecture, multi-scale KPPillarsBEV, that aims to mitigate the negative effects of grid rendering. Specifically, we propose a novel grid rendering method, KPBEV, which leverages the descriptive power of kernel point convolutions to improve the encoding of local point cloud contexts during grid rendering. In addition, we propose a general multi-scale grid rendering formulation to incorporate multi-scale feature maps into convolutional backbones of detection networks with arbitrary grid rendering methods. We perform extensive experiments on the nuScenes dataset and evaluate the methods in terms of detection performance and computational complexity. The proposed multi-scale KPPillarsBEV architecture outperforms the baseline by 5.37% and the previous state of the art by 2.88% in Car AP4.0 (average precision for a matching threshold of 4 meters) on the nuScenes validation set. Moreover, the proposed single-scale KPBEV grid rendering improves the Car AP4.0 by 2.90% over the baseline while maintaining the same inference speed.
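The information loss the abstract attributes to grid rendering comes from the discretization-and-aggregation step: many irregular points fall into one cell and are collapsed to a single feature vector. A minimal illustrative sketch of that step (a simple per-cell max-pooling scatter, not the paper's KPBEV method; all function and parameter names here are hypothetical):

```python
import numpy as np

def scatter_points_to_bev(points, feats, grid_min, cell_size, grid_shape):
    """Scatter irregular point features into a dense BEV grid by per-cell
    max-pooling. Illustrates the discretization/aggregation step that can
    lose information; it is NOT the KPBEV rendering proposed in the paper."""
    # Integer cell index for each point's (x, y) coordinate.
    idx = np.floor((points[:, :2] - grid_min) / cell_size).astype(int)
    # Discard points that fall outside the grid extent.
    mask = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    idx, feats = idx[mask], feats[mask]
    # Start with -inf so the first point in a cell always wins the max.
    bev = np.full((*grid_shape, feats.shape[1]), -np.inf)
    for (ix, iy), f in zip(idx, feats):
        bev[ix, iy] = np.maximum(bev[ix, iy], f)  # aggregate: max per cell
    bev[np.isinf(bev)] = 0.0  # empty cells become zero features
    return bev
```

Note that two points landing in the same cell keep only the element-wise maximum of their features; KPBEV is motivated by encoding such local context more expressively before this collapse.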
URL
https://arxiv.org/abs/2305.15836