Paper Reading AI Learner

Edge-Direct Visual Odometry

2019-06-11 21:53:49
Kevin Christensen, Martial Hebert

Abstract

In this paper we propose an edge-direct visual odometry algorithm that efficiently utilizes edge pixels to find the relative pose that minimizes the photometric error between images. Prior work on exploiting edge pixels instead treats edges as features and employs various techniques to match edge lines or pixels, which adds unnecessary complexity. Direct methods typically operate on all pixel intensities, which proves to be highly redundant. In contrast, our method builds naturally on direct visual odometry methods with minimal added computation. It is not only more efficient than direct dense methods, since we iterate with a fraction of the pixels, but also more accurate. We achieve high accuracy and efficiency by extracting edges from only one image and using robust Gauss-Newton to minimize the photometric error of these edge pixels. This simultaneously finds the corresponding edge pixels in the reference image and the relative camera pose that minimizes the photometric error. We test various edge detectors, including learned edges, and determine that the optimal edge detector for this method is the Canny edge detection algorithm with automatic thresholding. We highlight key differences between our edge-direct method and direct dense methods, in particular how higher levels of image pyramids can introduce significant aliasing effects and cause convergence to incorrect solutions. We show experimentally that reducing the photometric error of edge pixels also reduces the photometric error of all pixels, and we show through an ablation study the increase in accuracy obtained by optimizing over edge pixels only. We evaluate our method on the TUM RGB-D benchmark, on which we achieve state-of-the-art performance.
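The abstract reports Canny with automatic thresholding as the best edge detector for this pipeline. A common automatic scheme (from the wider literature, not a detail taken from this paper) places the threshold around the median image intensity, scaled by a parameter `sigma`. The sketch below, a minimal NumPy-only illustration, applies that rule to a plain Sobel gradient magnitude as a simplified stand-in for full Canny — non-maximum suppression and hysteresis are omitted, and `auto_edges`/`sobel_magnitude` are hypothetical helper names.

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel filters (pure NumPy, valid interior)."""
    img = img.astype(np.float64)
    kx = np.array([[-1.0, 0.0, 1.0], [-2.0, 0.0, 2.0], [-1.0, 0.0, 1.0]])
    ky = kx.T
    H, W = img.shape
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            patch = img[dy:H - 2 + dy, dx:W - 2 + dx]
            gx[1:-1, 1:-1] += kx[dy, dx] * patch
            gy[1:-1, 1:-1] += ky[dy, dx] * patch
    return np.hypot(gx, gy)

def auto_edges(img, sigma=0.33):
    """Median-based automatic thresholding: the edge threshold is derived
    from the median image intensity rather than hand-tuned per sequence."""
    upper = (1.0 + sigma) * np.median(img)
    return sobel_magnitude(img) > upper

# Usage: a bright square on a mid-gray background; only the square's
# outline (a thin band of pixels) should be marked as edges.
img = np.full((64, 64), 100.0)
img[16:48, 16:48] = 200.0
edges = auto_edges(img)
```

The point of the median rule is that the same `sigma` transfers across images of different brightness, which matters for an odometry pipeline that must run unattended over a whole sequence.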
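The core machinery the abstract describes — robust Gauss-Newton on the photometric error of edge pixels only — can be shown in miniature. The paper estimates a full SE(3) pose from RGB-D data; the sketch below, an assumption-laden toy, reduces the state to a 2-D image translation so the residual, Jacobian, and Huber weighting fit in a few lines. All names (`align_translation`, `bilinear`) are illustrative, not the authors' API.

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinear interpolation of img at float coordinates (x, y)."""
    H, W = img.shape
    x0 = np.clip(np.floor(x).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, H - 2)
    ax, ay = x - x0, y - y0
    return ((1 - ax) * (1 - ay) * img[y0, x0]
            + ax * (1 - ay) * img[y0, x0 + 1]
            + (1 - ax) * ay * img[y0 + 1, x0]
            + ax * ay * img[y0 + 1, x0 + 1])

def align_translation(I_ref, I_cur, edge_mask, iters=20, huber=10.0):
    """Robust Gauss-Newton over edge pixels only: find t = (tx, ty)
    minimizing the Huber-weighted squared photometric residuals
    r_i = I_cur(p_i + t) - I_ref(p_i), p_i ranging over edge pixels."""
    gy, gx = np.gradient(I_cur.astype(np.float64))   # image gradients of I_cur
    ys, xs = np.nonzero(edge_mask)                   # edge pixels from one image
    ref = I_ref[ys, xs].astype(np.float64)
    t = np.zeros(2)
    for _ in range(iters):
        x, y = xs + t[0], ys + t[1]
        r = bilinear(I_cur, x, y) - ref                 # photometric residuals
        J = np.stack([bilinear(gx, x, y),               # dr/dtx
                      bilinear(gy, x, y)], axis=1)      # dr/dty
        a = np.abs(r)
        w = np.where(a <= huber, 1.0, huber / np.maximum(a, 1e-12))  # Huber weights
        A = (J * w[:, None]).T @ J                      # 2x2 normal equations
        b = (J * w[:, None]).T @ r
        t -= np.linalg.solve(A + 1e-9 * np.eye(2), b)   # Gauss-Newton step
    return t

# Usage: a Gaussian blob shifted by (3, 2) pixels; edge pixels are the
# top 10% of the reference image's gradient magnitude.
yy, xx = np.mgrid[0:80, 0:80]
I_ref = 255 * np.exp(-((xx - 40.0) ** 2 + (yy - 40.0) ** 2) / 200.0)
I_cur = 255 * np.exp(-((xx - 43.0) ** 2 + (yy - 42.0) ** 2) / 200.0)
gy0, gx0 = np.gradient(I_ref)
mask = np.hypot(gx0, gy0) > np.percentile(np.hypot(gx0, gy0), 90)
t = align_translation(I_ref, I_cur, mask)   # recovers approximately (3, 2)
```

Note that the normal-equation system is only as large as the pose dimension (2x2 here, 6x6 for SE(3)), so iterating over a fraction of the pixels cuts exactly the per-pixel residual/Jacobian work — which is the efficiency argument the abstract makes.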
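The abstract also warns that higher levels of an image pyramid can alias and drive the optimization to an incorrect solution. The standard remedy is to low-pass filter before decimating, so frequencies above the new Nyquist rate are attenuated rather than folded back into the signal. A minimal sketch, using a 5-tap binomial kernel as the anti-aliasing filter (a common choice; not necessarily the paper's exact pyramid):

```python
import numpy as np

BINOMIAL5 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # separable low-pass kernel

def pyr_down(img):
    """Halve resolution with anti-aliasing: blur with the binomial kernel
    along each axis, then drop every other row and column."""
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, BINOMIAL5, mode="same"), 1, img.astype(np.float64))
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, BINOMIAL5, mode="same"), 0, blurred)
    return blurred[::2, ::2]

# High-frequency pattern: vertical stripes with a period of 2 pixels.
stripes = np.tile(np.array([0.0, 255.0]), (32, 16))  # 32x32 image
naive = stripes[::2, ::2]   # no filtering: every sample lands on a dark stripe
smooth = pyr_down(stripes)  # blur first: the pattern averages out to mid-gray
print(naive.max(), smooth[4, 4])  # prints: 0.0 127.5
```

Naive decimation turns the stripe pattern into a uniformly black image — a phantom structure that a photometric optimizer at that pyramid level would happily lock onto — while the filtered pyramid correctly reports the unresolvable pattern as its mid-gray average.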

URL

https://arxiv.org/abs/1906.04838

PDF

https://arxiv.org/pdf/1906.04838.pdf

