
Recurrent Space-time Graphs for Video Understanding

2019-04-11 08:51:48
Andrei Nicolicioiu, Iulia Duta, Marius Leordeanu

Abstract

Visual learning in the space-time domain remains a very challenging problem in artificial intelligence. Current computational models for understanding video data are heavily rooted in the classical single-image paradigm. It is not yet well understood how to integrate visual information from space and time into a single, general model. We propose a neural graph model, recurrent in space and time, suitable for capturing both the appearance and the complex interactions of different entities and objects within the changing world scene. Nodes and edges in our graph have dedicated neural networks for processing information. Edges pass messages between nodes connected across different locations and scales, or between past and present time steps. Nodes compute over features extracted from local parts in space and time, over messages received from their neighbours, and over their previous memory states. Messages are passed iteratively in order to transmit information globally and establish long-range interactions. Our model is general: it could learn to recognize a variety of high-level spatio-temporal concepts and be applied to different learning tasks. Through extensive experiments, we demonstrate competitive performance over strong baselines on the task of recognizing complex patterns of movement in video.
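
The abstract describes the architecture at a high level: dedicated neural networks on nodes and edges, iterative message passing within each frame, and a per-node memory carried recurrently across frames. The snippet below is a minimal, illustrative sketch of that idea in PyTorch; the specific choices (a GRU cell for the temporal recurrence, mean aggregation of messages, a fully connected node graph, feature dimension 64) are assumptions made for illustration and are not taken from the authors' implementation.

import torch
import torch.nn as nn

class SpaceTimeGraphSketch(nn.Module):
    """Toy recurrent space-time graph layer (illustrative sketch only)."""

    def __init__(self, dim=64):
        super().__init__()
        # Dedicated edge network: maps a (sender, receiver) pair to a message.
        self.edge_net = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        # Node network: combines a node's features with its aggregated messages.
        self.node_net = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        # Temporal recurrence: each node carries a memory state across frames.
        self.temporal = nn.GRUCell(dim, dim)

    def forward(self, features, memory=None, iterations=3):
        # features: (batch, nodes, dim) local features for the current frame.
        b, n, d = features.shape
        h = features
        for _ in range(iterations):          # iterative spatial message passing
            senders = h.unsqueeze(2).expand(b, n, n, d)
            receivers = h.unsqueeze(1).expand(b, n, n, d)
            messages = self.edge_net(torch.cat([senders, receivers], dim=-1))
            aggregated = messages.mean(dim=1)      # average incoming messages
            h = self.node_net(torch.cat([h, aggregated], dim=-1))
        if memory is None:
            memory = torch.zeros(b * n, d)
        # Temporal update: fold nodes into the batch dimension for the GRU cell.
        memory = self.temporal(h.reshape(b * n, d), memory)
        return memory.reshape(b, n, d), memory

# Usage: process a short clip frame by frame, carrying node memory forward.
model = SpaceTimeGraphSketch()
clip = torch.randn(5, 2, 9, 64)              # (time, batch, nodes, dim)
memory = None
for frame in clip:
    node_states, memory = model(frame, memory)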

URL

https://arxiv.org/abs/1904.05582

PDF

https://arxiv.org/pdf/1904.05582.pdf

