Abstract
We introduce a high-fidelity neural implicit dense visual Simultaneous Localization and Mapping (SLAM) system, termed DF-SLAM. In our work, we employ dictionary factors for scene representation, encoding the geometry and appearance information of the scene as a combination of basis and coefficient factors. Compared to neural implicit SLAM methods that directly encode scene information as features, our method reconstructs scene detail more faithfully and uses memory more efficiently. Moreover, our model size is insensitive to the size of the scene map, making our method better suited to large-scale scenes. Additionally, we employ feature integration rendering to accelerate color rendering while preserving rendering quality, further enhancing the real-time performance of our neural SLAM method. Extensive experiments on synthetic and real-world datasets demonstrate that our method is competitive with existing state-of-the-art neural implicit SLAM methods in terms of real-time performance, localization accuracy, and scene reconstruction quality. Our source code is available at this https URL.
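The core idea of a dictionary-factor representation is that, instead of storing a full feature vector per spatial location, each location stores only a small coefficient vector, and features are reconstructed as a linear combination of a shared basis (the "dictionary"). The following is a minimal illustrative sketch of this factorization and its memory savings; all names, shapes, and sizes are hypothetical and not DF-SLAM's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

num_basis = 8    # size of the shared dictionary (hypothetical)
feat_dim = 32    # dimension of the decoded feature (hypothetical)
grid_res = 16    # voxels per axis (hypothetical)

# Shared basis factors (the dictionary), reused across the whole scene.
basis = rng.standard_normal((num_basis, feat_dim))

# Per-voxel coefficient factors: only num_basis floats per voxel,
# instead of feat_dim floats for a dense feature grid.
coeffs = rng.standard_normal((grid_res, grid_res, grid_res, num_basis))

def decode_feature(i, j, k):
    """Reconstruct a voxel's feature as coefficients @ basis."""
    return coeffs[i, j, k] @ basis  # shape: (feat_dim,)

feature = decode_feature(3, 5, 7)
assert feature.shape == (feat_dim,)

# Memory comparison: dense feature grid vs. factorized storage.
dense_floats = grid_res ** 3 * feat_dim
factored_floats = grid_res ** 3 * num_basis + num_basis * feat_dim
print(dense_floats, factored_floats)
```

Because the basis is shared scene-wide, growing the map only adds coefficient storage, which is consistent with the claim that model size is insensitive to map size.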
URL
https://arxiv.org/abs/2404.17876