Abstract
Dense scene reconstruction for photo-realistic view synthesis has many applications, such as VR/AR and autonomous driving. However, most existing methods struggle in large-scale scenes due to three core challenges: \textit{(a) inaccurate depth input.} Accurate depth input is difficult to obtain in real-world large-scale scenes. \textit{(b) inaccurate pose estimation.} Most existing approaches rely on accurate pre-estimated camera poses. \textit{(c) insufficient scene representation capability.} A single global radiance field lacks the capacity to scale effectively to large-scale scenes. To this end, we propose an incremental joint learning framework that achieves accurate depth estimation, pose estimation, and large-scale scene reconstruction. A vision-transformer-based network is adopted as the backbone to improve scale-information estimation. For pose estimation, a feature-metric bundle adjustment (FBA) method is designed for accurate and robust camera tracking in large-scale scenes. For implicit scene representation, we propose an incremental scene representation method that constructs the entire large-scale scene as multiple local radiance fields, enhancing the scalability of the 3D scene representation. Extensive experiments demonstrate the effectiveness and accuracy of our method in depth estimation, pose estimation, and large-scale scene reconstruction.
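To illustrate the core idea behind feature-metric bundle adjustment — aligning frames by minimizing residuals between dense learned feature maps rather than raw pixel intensities — here is a minimal, hypothetical toy sketch. It is not the paper's implementation: it uses a synthetic 1-channel "feature map", a 2-DoF image-plane translation in place of a full 6-DoF camera pose, and a numerical Jacobian with damped Gauss-Newton steps.

```python
import numpy as np

def make_feature_map(h, w, shift=(0.0, 0.0)):
    """Synthetic smooth 1-channel 'feature map' with a known shift."""
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    return np.sin(0.3 * (xs - shift[0])) + np.cos(0.2 * (ys - shift[1]))

def bilinear(fmap, x, y):
    """Bilinear sampling of fmap at sub-pixel locations, clamped to borders."""
    h, w = fmap.shape
    x = np.clip(x, 0, w - 1.001)
    y = np.clip(y, 0, h - 1.001)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * fmap[y0, x0] + dx * (1 - dy) * fmap[y0, x0 + 1]
            + (1 - dx) * dy * fmap[y0 + 1, x0] + dx * dy * fmap[y0 + 1, x0 + 1])

def feature_metric_ba(f_ref, f_cur, iters=200, lr=0.5):
    """Damped Gauss-Newton on a 2-DoF translation minimizing feature residuals."""
    h, w = f_ref.shape
    ys, xs = np.mgrid[2:h - 2, 2:w - 2]
    xs, ys = xs.ravel().astype(float), ys.ravel().astype(float)
    t = np.zeros(2)  # unknown image-plane motion (tx, ty)
    eps = 1e-3
    for _ in range(iters):
        # feature-metric residual: current features at warped pixels vs. reference
        r = bilinear(f_cur, xs + t[0], ys + t[1]) - f_ref[ys.astype(int), xs.astype(int)]
        # numerical Jacobian of the residual w.r.t. the translation parameters
        gx = (bilinear(f_cur, xs + t[0] + eps, ys + t[1])
              - bilinear(f_cur, xs + t[0] - eps, ys + t[1])) / (2 * eps)
        gy = (bilinear(f_cur, xs + t[0], ys + t[1] + eps)
              - bilinear(f_cur, xs + t[0], ys + t[1] - eps)) / (2 * eps)
        J = np.stack([gx, gy], axis=1)
        t -= lr * np.linalg.lstsq(J, r, rcond=None)[0]  # damped Gauss-Newton step
    return t

f_ref = make_feature_map(64, 64)
f_cur = make_feature_map(64, 64, shift=(1.5, -0.8))  # ground-truth motion
t_est = feature_metric_ba(f_ref, f_cur)              # recovers roughly (1.5, -0.8)
```

In a real system, `f_ref`/`f_cur` would be multi-channel feature maps from a learned backbone, and the optimized variable would be a full camera pose with depth-based reprojection; this sketch only shows why feature-space residuals give a smooth, wide-basin objective for alignment.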
URL
https://arxiv.org/abs/2404.06050