Paper Reading AI Learner

Spatio-Temporal Multisensor Calibration Based on Gaussian Processes Moving Object Tracking

2019-04-08 16:53:44
Juraj Peršić, Luka Petrović, Ivan Marković, Ivan Petrović

Abstract

Perception is one of the key abilities of autonomous mobile robotic systems and often relies on fusion of heterogeneous sensors. Although this heterogeneity presents a challenge for sensor calibration, it is also the main prospect for reliability and robustness of autonomous systems. In this paper, we propose a method for multisensor calibration based on moving object trajectories estimated with Gaussian processes (GPs), yielding both temporal and extrinsic calibration parameters. The appealing properties of the proposed temporal calibration method are: coordinate frame invariance, thus avoiding prior extrinsic calibration; theoretically grounded batch state estimation and interpolation using GPs; computational efficiency with O(n) complexity; leveraging data already available on autonomous robot platforms; and an end result that enables 3D point-to-point extrinsic multisensor calibration. The proposed method is validated in both simulations and real-world experiments. For the real-world experiments, we evaluated the method on two multisensor systems: an externally triggered stereo camera, which provides temporal ground truth, and a heterogeneous combination of a camera and a motion capture system. The results show that the estimated time delays are accurate up to a fraction of the fastest sensor's sampling time.
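The abstract's final step, 3D point-to-point extrinsic calibration, amounts to aligning the same object trajectory observed in two sensor frames. A common closed-form solution for this is the SVD-based Kabsch/Umeyama alignment; the sketch below is an illustration under that assumption, not the authors' implementation, and assumes the two trajectories are already temporally aligned and point-wise corresponding.

```python
import numpy as np

def point_to_point_extrinsic(P, Q):
    """Closed-form (Kabsch/Umeyama) alignment of corresponding 3D point sets.

    P, Q: (N, 3) arrays of corresponding points, e.g. the same moving-object
    trajectory expressed in two sensor frames after temporal calibration.
    Returns rotation R and translation t such that Q ~= R @ p + t per point.
    """
    p_mean = P.mean(axis=0)
    q_mean = Q.mean(axis=0)
    # 3x3 cross-covariance of the centered point sets
    H = (P - p_mean).T @ (Q - q_mean)
    U, _, Vt = np.linalg.svd(H)
    # Correct a possible reflection so R is a proper rotation (det R = +1)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t
```

With noise-free correspondences this recovers the exact extrinsic transform; with noisy trajectory estimates it gives the least-squares optimal rigid alignment.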


URL

https://arxiv.org/abs/1904.04187

PDF

https://arxiv.org/pdf/1904.04187.pdf

