Reconstruct and Represent Video Contents for Captioning via Reinforcement Learning

2019-06-03 06:04:00
Wei Zhang, Bairui Wang, Lin Ma, Wei Liu

Abstract

In this paper, we address the problem of describing the visual content of a video sequence with natural language. Unlike previous video captioning work, which mainly exploits cues from the video content to generate a language description, we propose a reconstruction network (RecNet) with a novel encoder-decoder-reconstructor architecture that leverages both the forward flow (video to sentence) and the backward flow (sentence to video) for video captioning. Specifically, the encoder-decoder component uses the forward flow to produce a sentence description from the encoded video semantic features. Two types of reconstructors are then proposed to exploit the backward flow, reproducing the video features from local and global perspectives, respectively, based on the hidden state sequence generated by the decoder. Moreover, to reconstruct the video features comprehensively, we propose to fuse the two types of reconstructors. The generation loss yielded by the encoder-decoder component and the reconstruction loss introduced by the reconstructor are jointly used to train the proposed RecNet in an end-to-end fashion. Furthermore, RecNet is fine-tuned with CIDEr optimization via reinforcement learning, which significantly boosts captioning performance. Experimental results on benchmark datasets demonstrate that the proposed reconstructor consistently improves video captioning performance.
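To make the training objective concrete, here is a minimal PyTorch sketch of the architecture the abstract describes: a forward (video-to-sentence) path, a backward (sentence-to-video) reconstructor, a joint loss combining generation and reconstruction terms, and a self-critical REINFORCE loss for the CIDEr fine-tuning stage. This is not the authors' implementation: the GRU layers, the mean-pooled global reconstruction target, the weight `lam`, and the names `RecNetSketch`, `joint_loss`, and `scst_loss` are all illustrative assumptions, and the paper's local reconstructor and attention mechanism are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RecNetSketch(nn.Module):
    """Encoder-decoder-reconstructor in miniature (all sizes hypothetical)."""

    def __init__(self, feat_dim=2048, hid_dim=512, vocab_size=10000):
        super().__init__()
        # Forward flow: video features -> sentence.
        self.encoder = nn.GRU(feat_dim, hid_dim, batch_first=True)
        self.embed = nn.Embedding(vocab_size, hid_dim)
        self.decoder = nn.GRU(hid_dim, hid_dim, batch_first=True)
        self.classifier = nn.Linear(hid_dim, vocab_size)
        # Backward flow: decoder hidden states -> video feature (global-style).
        self.reconstructor = nn.GRU(hid_dim, feat_dim, batch_first=True)

    def forward(self, video_feats, captions):
        _, enc_h = self.encoder(video_feats)                     # (1, B, hid_dim)
        dec_out, _ = self.decoder(self.embed(captions), enc_h)   # (B, T, hid_dim)
        logits = self.classifier(dec_out)                        # word logits per step
        rec_out, _ = self.reconstructor(dec_out)                 # (B, T, feat_dim)
        return logits, rec_out.mean(dim=1)                       # pooled "global" feature


def joint_loss(logits, targets, rec_feat, video_feats, lam=0.2):
    """Generation loss + lam * reconstruction loss (lam is an assumed weight)."""
    gen = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
    rec = F.mse_loss(rec_feat, video_feats.mean(dim=1))
    return gen + lam * rec


def scst_loss(sample_logprobs, sample_cider, greedy_cider):
    """REINFORCE with a greedy-decoding baseline, maximizing expected CIDEr."""
    advantage = (sample_cider - greedy_cider).detach()           # (B,)
    return -(advantage * sample_logprobs).mean()
```

A toy forward/backward pass under these assumptions:

```python
feats = torch.randn(2, 8, 2048)            # 2 clips, 8 frame features each
caps = torch.randint(0, 10000, (2, 5))     # 2 toy captions of length 5
model = RecNetSketch()
logits, rec_feat = model(feats, caps)
joint_loss(logits, caps, rec_feat, feats).backward()
```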

URL

https://arxiv.org/abs/1906.01452

PDF

https://arxiv.org/pdf/1906.01452.pdf

