Abstract
We propose a deep learning-based LiDAR odometry estimation method called LoRCoN-LO that utilizes the long-term recurrent convolutional network (LRCN) structure. An LRCN combines CNN and LSTM layers, so it can process spatial and temporal information in a single architecture. This makes it well suited to predicting continuous robot motion from point clouds, which carry rich spatial information. We therefore built the LoRCoN-LO model on LRCN layers and used it to predict the robot's pose. For performance verification, we conducted experiments on a public dataset (KITTI). The results show that LoRCoN-LO produces accurate odometry predictions on this dataset. The code is available at this https URL.
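To make the LRCN idea concrete, here is a minimal PyTorch sketch of a CNN-plus-LSTM odometry regressor. This is a hypothetical illustration of the general LRCN pattern the abstract describes, not the authors' actual LoRCoN-LO architecture; all layer sizes, the input projection format (e.g. depth/intensity range images), and the 6-DoF output head are assumptions.

```python
import torch
import torch.nn as nn

class LRCNOdometry(nn.Module):
    """Illustrative LRCN-style model (NOT the authors' exact LoRCoN-LO):
    a CNN extracts spatial features from each projected LiDAR frame,
    an LSTM aggregates them over time, and a linear head regresses
    a 6-DoF relative pose per time step."""

    def __init__(self, in_channels=2, hidden=256):
        super().__init__()
        # Per-frame spatial feature extractor (channel count is an assumption,
        # e.g. depth + intensity projections of the point cloud).
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Temporal aggregation across the frame sequence.
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        # 6-DoF pose head: 3 translation + 3 rotation parameters.
        self.head = nn.Linear(hidden, 6)

    def forward(self, x):
        # x: (batch, time, channels, height, width)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)  # (B, T, 32)
        out, _ = self.lstm(feats)                         # (B, T, hidden)
        return self.head(out)                             # (B, T, 6)

model = LRCNOdometry()
# A batch of 2 sequences of 5 projected LiDAR frames (sizes are illustrative).
poses = model(torch.randn(2, 5, 2, 64, 900))
print(poses.shape)  # torch.Size([2, 5, 6])
```

The key design point of the LRCN pattern is that the CNN is applied independently to every frame (by folding time into the batch dimension), while the LSTM consumes the resulting feature sequence, so spatial and temporal structure are handled by the layers best suited to each.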
URL
https://arxiv.org/abs/2303.11853