Event-based Moving Object Detection and Tracking

2018-07-23 02:25:54
Anton Mitrokhin, Cornelia Fermüller, Chethan Parameshwara, Yiannis Aloimonos

Abstract

Event-based vision sensors, such as the Dynamic Vision Sensor (DVS), are ideally suited for real-time motion analysis. The unique properties of their readings provide high temporal resolution, superior sensitivity to light, and low latency. These properties make it possible to estimate motion extremely reliably even in the most challenging scenarios, but they come at a price: modern event-based vision sensors have very low spatial resolution and produce considerable noise. Moreover, the asynchronous nature of the event stream calls for novel algorithms. This paper presents a new, efficient approach to object tracking with asynchronous cameras. We present a novel event stream representation that lets us exploit the dynamic (temporal) component of the event stream, not only its spatial component, at every moment in time. This is done by approximating the 3D geometry of the event stream with a parametric model; as a result, the algorithm can produce a motion-compensated event stream (effectively approximating egomotion) in extremely low-light and noisy conditions, without any external sensors, feature tracking, or explicit optical flow computation. We demonstrate our framework on the task of independent motion detection and tracking, where we use temporal model inconsistencies to locate differently moving objects in challenging situations of very fast motion.
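The paper itself is linked below and contains no code; the following is only a minimal NumPy sketch of the general motion-compensation idea the abstract describes. It assumes a simple global translational warp scored by image sharpness (the paper fits a richer parametric model to the 3D geometry of the event stream), and all names (`warp_events`, `time_image`, `motion_compensate`, `inconsistency_map`) are hypothetical, not from the authors' implementation.

```python
import numpy as np

def warp_events(xs, ys, ts, vx, vy, t_ref=0.0):
    """Warp event coordinates back to a reference time under a global
    translational motion model (vx, vy) in pixels per second. The paper
    uses a richer parametric model; this sketch keeps only translation."""
    return xs - vx * (ts - t_ref), ys - vy * (ts - t_ref)

def time_image(xs, ys, ts, shape):
    """Average event timestamp per pixel -- a crude stand-in for the
    paper's temporal event-stream representation."""
    img, cnt = np.zeros(shape), np.zeros(shape)
    xi = np.clip(np.round(xs).astype(int), 0, shape[1] - 1)
    yi = np.clip(np.round(ys).astype(int), 0, shape[0] - 1)
    np.add.at(img, (yi, xi), ts)
    np.add.at(cnt, (yi, xi), 1)
    return np.divide(img, cnt, out=np.zeros(shape), where=cnt > 0)

def motion_compensate(xs, ys, ts, shape, v_range=np.linspace(-100, 100, 21)):
    """Grid-search the global velocity that makes the warped event count
    image sharpest (maximum variance), i.e. best 'deblurs' the stream."""
    best_v, best_score = (0.0, 0.0), -np.inf
    for vx in v_range:
        for vy in v_range:
            wx, wy = warp_events(xs, ys, ts, vx, vy, t_ref=ts.min())
            hist = np.zeros(shape)
            xi = np.clip(np.round(wx).astype(int), 0, shape[1] - 1)
            yi = np.clip(np.round(wy).astype(int), 0, shape[0] - 1)
            np.add.at(hist, (yi, xi), 1)
            if hist.var() > best_score:  # sharper image -> higher variance
                best_score, best_v = hist.var(), (vx, vy)
    return best_v

def inconsistency_map(xs, ys, ts, shape, v):
    """After compensating with the global (ego)motion estimate, events
    from independently moving objects remain smeared: their per-pixel
    timestamps disagree with the global model. As a simple proxy, flag
    pixels whose mean warped timestamp is far from the slice midpoint."""
    wx, wy = warp_events(xs, ys, ts, *v, t_ref=ts.min())
    timg = time_image(wx, wy, ts - ts.min(), shape)
    return np.abs(timg - (ts.max() - ts.min()) / 2.0)
```

Thresholding such an inconsistency map and clustering the surviving pixels would give candidate regions for differently moving objects, which is the detection-by-temporal-inconsistency idea the abstract refers to; the actual method in the paper operates on its parametric 3D model of the event stream rather than this simplified proxy.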

URL

https://arxiv.org/abs/1803.04523

PDF

https://arxiv.org/pdf/1803.04523.pdf

