Abstract
This paper presents a novel 3D mapping robot with an omnidirectional field-of-view (FoV) sensor suite composed of a non-repetitive-scanning LiDAR and an omnidirectional camera. Exploiting the non-repetitive scanning pattern of the LiDAR, we propose an automatic, targetless co-calibration method that simultaneously estimates the intrinsic parameters of the omnidirectional camera and the extrinsic parameters between the camera and the LiDAR, a crucial step for bringing color and texture information to the point clouds in surveying and mapping tasks. The method is compared against target-based intrinsic calibration and mutual-information (MI)-based extrinsic calibration, respectively. With this co-calibrated sensor suite, the hybrid mapping robot integrates both an odometry-based mapping mode and a stationary mapping mode. We further propose a new coarse-to-fine mapping workflow: efficiently building a coarse map of the global environment in the odometry-based mapping mode; planning viewpoints in the region of interest (ROI) based on the coarse map (building on our previous work); and navigating to each viewpoint to perform finer, more precise stationary scanning and mapping of the ROI. The fine map is stitched into the global coarse map, yielding results that are more efficient than conventional stationary approaches and more precise than emerging odometry-based approaches.
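The coarse-to-fine workflow described above can be sketched as a simple pipeline. This is a minimal illustrative sketch only; all function names and data representations are hypothetical placeholders, not the paper's actual implementation:

```python
# Hypothetical sketch of the coarse-to-fine mapping workflow from the
# abstract. Points are tagged ("coarse", cell) or ("fine", cell); real
# maps would hold colored 3D point clouds.

def coarse_map_with_odometry(environment):
    """Odometry-based pass: fast global coverage, but coarse."""
    return [("coarse", cell) for cell in environment]

def plan_viewpoints(coarse_map, roi):
    """Pick viewpoints covering the ROI, using the coarse map."""
    return [cell for (_, cell) in coarse_map if cell in roi]

def stationary_scan(viewpoint):
    """Stationary pass at one viewpoint: slower, denser, more precise."""
    return [("fine", viewpoint)]

def coarse_to_fine_mapping(environment, roi):
    coarse = coarse_map_with_odometry(environment)
    fine = []
    for vp in plan_viewpoints(coarse, roi):
        # Navigate to the viewpoint, then scan while stationary.
        fine.extend(stationary_scan(vp))
    # Stitch the fine ROI map into the global coarse map.
    return coarse + fine

global_map = coarse_to_fine_mapping(environment=range(10), roi={3, 4})
```

The sketch captures the key design choice: the cheap odometry-based pass covers everything once, and the expensive stationary passes are spent only inside the ROI.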
URL
https://arxiv.org/abs/2301.12934