Abstract
This paper presents M$^2$Depth, a novel self-supervised two-frame multi-camera metric depth estimation network designed to predict reliable scale-aware surrounding depth in autonomous driving. Unlike previous works that use multi-view images from a single time step or multi-frame images from a single camera, M$^2$Depth takes two temporally adjacent frames from multiple cameras as input and produces high-quality surrounding depth. We first construct cost volumes in the spatial and temporal domains individually and propose a spatial-temporal fusion module that integrates the spatial-temporal information to yield a strong volume representation. We additionally combine the neural prior from SAM features with internal features to reduce the ambiguity between foreground and background and to strengthen depth edges. Extensive experimental results on the nuScenes and DDAD benchmarks show that M$^2$Depth achieves state-of-the-art performance. More results can be found at this https URL.
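The spatial-temporal fusion step can be illustrated with a minimal sketch. Note that this is not the paper's implementation: the confidence-weighted blend below is a hypothetical stand-in for M$^2$Depth's learned fusion module, and the function names and array shapes (depth hypotheses × height × width) are assumptions for illustration only.

```python
import numpy as np

def fuse_cost_volumes(spatial_cv, temporal_cv):
    """Fuse spatial and temporal cost volumes of shape (D, H, W).

    Hypothetical stand-in for a learned fusion module: each volume is
    weighted per pixel by the peakedness of its softmax over the depth
    hypotheses, so the more confident volume dominates the fused result.
    """
    def confidence(cv):
        # Softmax over the depth-hypothesis axis (numerically stabilized).
        e = np.exp(cv - cv.max(axis=0, keepdims=True))
        p = e / e.sum(axis=0, keepdims=True)
        # Max probability per pixel as a (1, H, W) confidence map.
        return p.max(axis=0, keepdims=True)

    w_s = confidence(spatial_cv)
    w_t = confidence(temporal_cv)
    # Confidence-weighted average of the two volumes.
    return (w_s * spatial_cv + w_t * temporal_cv) / (w_s + w_t + 1e-8)

# Toy example: 8 depth hypotheses over a 4x4 image.
rng = np.random.default_rng(0)
spatial = rng.standard_normal((8, 4, 4))
temporal = rng.standard_normal((8, 4, 4))
fused = fuse_cost_volumes(spatial, temporal)
print(fused.shape)  # (8, 4, 4)
```

In the paper the fusion is learned end-to-end; here the per-pixel confidence simply favors whichever volume has a sharper matching distribution at that pixel.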
URL
https://arxiv.org/abs/2405.02004