Abstract
We propose a novel real-time LiDAR intensity-image-based simultaneous localization and mapping (SLAM) method that addresses the geometric degeneracy problem in unstructured environments. Traditional LiDAR-based front-end odometry relies mostly on geometric features such as points, lines, and planes; a lack of these features in the environment can cause the entire odometry system to fail. To avoid this problem, we extract feature points from the LiDAR-generated point cloud that match features identified in LiDAR intensity images. We then use the extracted feature points to perform scan registration and estimate the robot's ego-motion. For the back-end, we jointly optimize the distances between corresponding feature points and the point-to-plane distances for planes identified in the map. In addition, we use the features extracted from intensity images to detect loop-closure candidates from previous scans and perform pose-graph optimization. Our experiments show that our method runs in real time with high accuracy and works well under illumination changes and in low-texture, unstructured environments.
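The two geometric quantities the abstract mentions can be illustrated with a short sketch: a closed-form rigid alignment (Kabsch/SVD) of matched 3-D feature points, standing in for the scan-registration step, and a signed point-to-plane distance of the kind used in the back-end joint cost. This is a minimal illustration under assumed interfaces, not the paper's implementation; the function names and shapes are hypothetical.

```python
import numpy as np

def align_feature_points(src, dst):
    """Estimate the rigid transform (R, t) with dst ~ R @ src + t via the
    Kabsch/SVD closed form, given matched (N, 3) feature-point arrays.
    Stands in for registering intensity-image feature correspondences."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - src_mean).T @ (dst - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t

def point_to_plane_residual(points, plane_point, plane_normal):
    """Signed distances from (N, 3) points to a plane given by a point on
    the plane and its (not necessarily unit) normal vector."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return (points - plane_point) @ n
```

A back-end of the kind described would minimize a weighted sum of the point-to-point residuals left after `align_feature_points` and the `point_to_plane_residual` terms for map planes, typically inside a nonlinear least-squares solver.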
URL
https://arxiv.org/abs/2301.09257