Abstract
Recent works have revealed the superiority of feature-level fusion for cross-modal 3D object detection, where fine-grained feature propagation from 2D image pixels to 3D LiDAR points has been widely adopted for performance improvement. Still, the potential of heterogeneous feature propagation between the 2D and 3D domains has not been fully explored. In this paper, in contrast to existing pixel-to-point feature propagation, we investigate the opposite point-to-pixel direction, allowing point-wise features to flow inversely into the 2D image branch. Thus, when jointly optimizing the 2D and 3D streams, the gradients back-propagated from the 2D image branch can boost the representation ability of the 3D backbone network operating on LiDAR point clouds. Then, by combining the pixel-to-point and point-to-pixel information flow mechanisms, we construct a bidirectional feature propagation framework, dubbed BiProDet. In addition to the architectural design, we also propose normalized local coordinate (NLC) map estimation, a new 2D auxiliary task for training the 2D image branch, which facilitates learning local spatial-aware features from the image modality and implicitly enhances the overall 3D detection performance. Extensive experiments and ablation studies validate the effectiveness of our method. Notably, we rank $\mathbf{1^{\mathrm{st}}}$ on the cyclist class of the highly competitive KITTI benchmark at the time of submission. The source code is available at this https URL.
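The point-to-pixel propagation described above can be illustrated with a minimal sketch: project each LiDAR point into the image plane with a camera projection matrix, then scatter its point-wise features onto the corresponding 2D feature-map cells so they can enter the image branch. The function name, shapes, and the average-pooling of colliding points are illustrative assumptions, not the paper's exact implementation.

```python
# Hedged sketch of point-to-pixel feature flow (assumed details, not the
# authors' code): project 3D points to pixels, scatter features to a 2D map.
import numpy as np

def point_to_pixel_scatter(points, point_feats, proj, h, w):
    """points: (N, 3) XYZ; point_feats: (N, C); proj: (3, 4) camera matrix.
    Returns an (h, w, C) feature map; colliding points are averaged."""
    n, c = point_feats.shape
    homo = np.concatenate([points, np.ones((n, 1))], axis=1)  # (N, 4)
    uvz = (proj @ homo.T).T                                   # (N, 3)
    z = uvz[:, 2]
    valid = z > 1e-6                                          # points in front of camera
    u = np.round(uvz[valid, 0] / z[valid]).astype(int)
    v = np.round(uvz[valid, 1] / z[valid]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)          # on-image pixels only
    u, v = u[inside], v[inside]
    feats = point_feats[valid][inside]
    fmap = np.zeros((h, w, c))
    count = np.zeros((h, w, 1))
    np.add.at(fmap, (v, u), feats)                            # accumulate features
    np.add.at(count, (v, u), 1.0)                             # count hits per pixel
    return fmap / np.maximum(count, 1.0)
```

In the full framework this scattered map would be fused with the image backbone's features, so that gradients from the 2D losses flow back through `point_feats` into the 3D backbone.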
URL
https://arxiv.org/abs/2301.09077