Paper Reading AI Learner

Coarse-to-fine Hybrid 3D Mapping System with Co-calibrated Omnidirectional Camera and Non-repetitive LiDAR

2023-01-30 14:31:49
Ziliang Miao, Buwei He, Wenya Xie, Wenquan Zhao, Xiao Huang, Jian Bai, Xiaoping Hong

Abstract

This paper presents a novel 3D mapping robot with an omnidirectional field-of-view (FoV) sensor suite composed of a non-repetitive LiDAR and an omnidirectional camera. Thanks to the non-repetitive scanning nature of the LiDAR, an automatic targetless co-calibration method is proposed to simultaneously calibrate the intrinsic parameters of the omnidirectional camera and the extrinsic parameters between the camera and the LiDAR, a crucial prerequisite for bringing color and texture information to the point clouds in surveying and mapping tasks. Comparisons and analyses are made against target-based intrinsic calibration and mutual information (MI)-based extrinsic calibration, respectively. With this co-calibrated sensor suite, the hybrid mapping robot integrates both an odometry-based mapping mode and a stationary mapping mode. We also propose a new coarse-to-fine mapping workflow: efficient, coarse mapping of the global environment in the odometry-based mode; viewpoint planning in the region of interest (ROI) based on the coarse map (building on our previous work); and navigating to each viewpoint to perform finer, more precise stationary scanning and mapping of the ROI. The fine map is stitched into the global coarse map, yielding results that are more efficient than conventional stationary approaches and more precise than emerging odometry-based approaches.
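
The co-calibrated parameters are what make point-cloud colorization possible: the extrinsics move each LiDAR point into the camera frame, and the camera intrinsics project it onto the omnidirectional image to look up a color. The sketch below is a rough illustration of that step only, not the paper's implementation; it assumes an equirectangular camera model, and equirect_project, T_cam_lidar, and the toy data are hypothetical placeholders.

```python
import numpy as np

def equirect_project(p_cam, width, height):
    """Project a 3D point in the camera frame onto an equirectangular image."""
    x, y, z = p_cam
    lon = np.arctan2(x, z)                      # azimuth in [-pi, pi]
    lat = np.arcsin(y / np.linalg.norm(p_cam))  # elevation in [-pi/2, pi/2]
    u = (lon / (2 * np.pi) + 0.5) * width
    v = (lat / np.pi + 0.5) * height
    return int(u) % width, min(int(v), height - 1)

def colorize(points_lidar, image, T_cam_lidar):
    """Attach an RGB value from the omnidirectional image to each LiDAR point."""
    h, w, _ = image.shape
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]   # extrinsics: LiDAR -> camera frame
    colors = np.array([image[v, u] for u, v in
                       (equirect_project(p, w, h) for p in pts_cam)])
    return np.hstack([points_lidar, colors])     # (N, 6): x, y, z, r, g, b

# Toy usage with placeholder data; in practice T_cam_lidar comes from the
# targetless co-calibration and the image from the omnidirectional camera.
points = np.random.randn(1000, 3) * 5.0 + np.array([0.0, 0.0, 10.0])
image = np.zeros((512, 1024, 3), dtype=np.uint8)
T_cam_lidar = np.eye(4)
colored = colorize(points, image, T_cam_lidar)
```

The nearest-pixel lookup, equirectangular model, and identity extrinsics are deliberate simplifications; the paper instead estimates the actual camera model and camera-LiDAR transform automatically, without calibration targets, by exploiting the dense coverage of the non-repetitive LiDAR scans.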

URL

https://arxiv.org/abs/2301.12934

PDF

https://arxiv.org/pdf/2301.12934.pdf

