Bidirectional Multirate Reconstruction for Temporal Modeling in Videos

2016-11-28 10:32:03
Linchao Zhu, Zhongwen Xu, Yi Yang

Abstract

Despite the recent success of neural networks in image feature learning, a major problem in the video domain is the lack of sufficient labeled data for learning to model temporal information. In this paper, we propose an unsupervised temporal modeling method that learns from untrimmed videos. The speed of motion varies constantly, e.g., a man may run quickly or slowly. We therefore train a Multirate Visual Recurrent Model (MVRM) by encoding frames of a clip with different intervals. This learning process makes the learned model more capable of dealing with motion speed variance. Given a clip sampled from a video, we use its past and future neighboring clips as the temporal context, and reconstruct the two temporal transitions, i.e., the present$\rightarrow$past transition and the present$\rightarrow$future transition, which reflect the temporal information from different views. The proposed method exploits the two transitions simultaneously by incorporating a bidirectional reconstruction which consists of a backward reconstruction and a forward reconstruction. We apply the proposed method to two challenging video tasks, i.e., complex event detection and video captioning, in which it achieves state-of-the-art performance. Notably, our method generates the best single feature for event detection with a relative improvement of 10.4% on the MEDTest-13 dataset and achieves the best performance in video captioning across all evaluation metrics on the YouTube2Text dataset.
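The idea described in the abstract is easy to prototype. Below is a minimal PyTorch sketch of a multirate encoder trained with a bidirectional (backward + forward) reconstruction objective; it is not the authors' implementation, and every name and hyperparameter (`MVRMSketch`, `feat_dim`, `hidden_dim`, the sampling intervals in `rates`) is an illustrative assumption.

```python
import torch
import torch.nn as nn

class MVRMSketch(nn.Module):
    """Minimal sketch of multirate bidirectional reconstruction (illustrative only)."""

    def __init__(self, feat_dim=2048, hidden_dim=512, rates=(1, 2, 4)):
        super().__init__()
        self.rates = rates  # assumed frame-sampling intervals for multirate encoding
        self.encoder = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.dec_past = nn.GRU(hidden_dim, hidden_dim, batch_first=True)    # backward reconstruction
        self.dec_future = nn.GRU(hidden_dim, hidden_dim, batch_first=True)  # forward reconstruction
        self.out = nn.Linear(hidden_dim, feat_dim)

    def forward(self, present, past, future):
        # present/past/future: (batch, time, feat_dim) per-frame CNN features
        losses = []
        for r in self.rates:
            # encode the present clip at sampling interval r
            _, h = self.encoder(present[:, ::r])
            for dec, target in ((self.dec_past, past), (self.dec_future, future)):
                steps = target.size(1)
                # feed the clip code as the decoder input at every step
                dec_in = h.transpose(0, 1).repeat(1, steps, 1)
                recon, _ = dec(dec_in, h)
                losses.append(((self.out(recon) - target) ** 2).mean())
        return torch.stack(losses).mean()  # unsupervised training signal

# toy usage: three neighboring 16-frame clips with 2048-d frame features
model = MVRMSketch()
present, past, future = (torch.randn(4, 16, 2048) for _ in range(3))
loss = model(present, past, future)
loss.backward()
```

As in the paper, the decoders exist only to provide the unsupervised training signal; after training, the encoder's representation of a clip would be used as the video feature for downstream tasks such as event detection and captioning.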

URL

https://arxiv.org/abs/1611.09053

PDF

https://arxiv.org/pdf/1611.09053.pdf

