Abstract
Localization is a key challenge in many robotics applications. In this work we explore LIDAR-based global localization in both urban and natural environments and develop a method suitable for online operation. Our approach leverages an efficient deep learning architecture that learns compact point cloud descriptors directly from 3D data, and matches the current scene against the prior map using an efficient feature-space representation of a set of segmented point clouds. We show that down-sampling in the inner layers of the network significantly reduces computation time without sacrificing performance. We present an extensive evaluation of LIDAR-based global localization methods on nine scenarios from six datasets spanning urban, park, forest, and industrial environments, part of which includes post-processed data from 30 sequences of the Oxford RobotCar dataset that we make publicly available. Our experiments demonstrate a threefold reduction in computation and 70% lower memory consumption, with only a marginal loss in localization frequency. Because it does not require a GPU at run time, the proposed method allows the full pipeline to run on robots with limited computation payloads such as drones, quadrupeds, and UGVs.
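The core retrieval step described above can be illustrated with a minimal sketch: each segmented point cloud is reduced to a compact descriptor, and localization amounts to nearest-neighbour matching between scene descriptors and the prior map in feature space. This is not the paper's implementation; the descriptor values and segment names below are hand-made placeholders standing in for the network's learned embeddings.

```python
# Hypothetical sketch of feature-space segment matching, assuming the
# network has already produced a fixed-length descriptor per segment.
from math import dist

def nearest_map_segment(scene_descriptor, map_descriptors):
    """Return the index of the map descriptor closest (Euclidean)
    to the given scene descriptor."""
    return min(range(len(map_descriptors)),
               key=lambda i: dist(scene_descriptor, map_descriptors[i]))

# Prior map: one compact descriptor per segmented point-cloud object.
map_descriptors = [
    [0.9, 0.1, 0.0],   # segment 0, e.g. a tree trunk
    [0.1, 0.8, 0.2],   # segment 1, e.g. a building corner
    [0.0, 0.2, 0.9],   # segment 2, e.g. a lamp post
]
# Descriptors extracted from the current scan (perturbed copies here,
# mimicking the same objects observed from a new viewpoint).
scene_descriptors = [[0.85, 0.12, 0.03], [0.02, 0.22, 0.88]]

matches = [nearest_map_segment(d, map_descriptors) for d in scene_descriptors]
print(matches)  # [0, 2]
```

Because the descriptors are low-dimensional and the search is a plain distance comparison, this matching stage needs no GPU at run time, which is the property the abstract highlights for computation-constrained robots.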
URL
https://arxiv.org/abs/2301.13583