Abstract
Localization and mapping are critical tasks for applications such as autonomous driving and robotics. Outdoor environments are particularly challenging because of their unbounded nature. In this work, we present MM-Gaussian, a LiDAR-camera multi-modal fusion system for localization and mapping in unbounded scenes. Our approach builds on the recently developed 3D Gaussian representation, which achieves high rendering quality at fast rendering speeds. Specifically, our system exploits the geometric structure provided by solid-state LiDAR to address the inaccurate depth that purely visual solutions suffer from in unbounded outdoor scenarios. In addition, we optimize 3D Gaussian point clouds via pixel-level gradient descent to fully exploit the color information in images, achieving photorealistic rendering. To further strengthen the robustness of the system, we design a relocalization module that helps the system return to the correct trajectory after a localization failure. Experiments conducted in multiple scenarios demonstrate the effectiveness of our method.
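The pixel-level gradient descent over Gaussian colors described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a simplified rendering model in which each pixel is an alpha-weighted blend of Gaussian colors with fixed, precomputed weights, and optimizes only the per-Gaussian colors against a photometric L2 loss. The variable names (`W`, `colors`, `target`) are illustrative, not from the paper.

```python
import numpy as np

# Simplified rendering model (assumed): each pixel's color is an
# alpha-weighted sum of Gaussian colors, rendered = W @ colors,
# where W holds fixed per-pixel blending weights.
rng = np.random.default_rng(0)
n_gaussians, n_pixels = 8, 16

W = rng.random((n_pixels, n_gaussians))      # fixed blending weights
W /= W.sum(axis=1, keepdims=True)            # weights sum to 1 per pixel
target = rng.random((n_pixels, 3))           # ground-truth pixel colors
colors = rng.random((n_gaussians, 3))        # per-Gaussian RGB, optimized

loss_before = np.mean((W @ colors - target) ** 2)

lr = 0.5
for _ in range(500):
    residual = W @ colors - target           # per-pixel photometric error
    # analytic gradient of mean squared error w.r.t. the colors
    grad = 2.0 * W.T @ residual / residual.size
    colors -= lr * grad                      # pixel-level gradient step

loss_after = np.mean((W @ colors - target) ** 2)
```

In the actual system, the same idea is applied to the full set of 3D Gaussian parameters (positions, covariances, opacities, colors) with the rendering performed by differentiable splatting rather than a fixed weight matrix.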
URL
https://arxiv.org/abs/2404.04026