Abstract
In this paper, the problem of describing the visual content of a video sequence with natural language is addressed. Unlike previous video captioning work, which mainly exploits the cues of the video content to produce a language description, we propose a reconstruction network (RecNet) with a novel encoder-decoder-reconstructor architecture that leverages both the forward (video to sentence) and backward (sentence to video) flows for video captioning. Specifically, the encoder-decoder uses the forward flow to produce the sentence description from the encoded video semantic features. Two types of reconstructors are designed to exploit the backward flow and reproduce the video features from the hidden state sequence generated by the decoder. The generation loss yielded by the encoder-decoder and the reconstruction loss introduced by the reconstructor are jointly used to train the proposed RecNet in an end-to-end fashion. Experimental results on benchmark datasets demonstrate that the proposed reconstructor can boost the encoder-decoder models and lead to significant gains in video captioning accuracy.
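To make the architecture concrete, below is a minimal PyTorch sketch of the encoder-decoder-reconstructor idea and its joint objective. It is not the paper's implementation: the module names (RecNetSketch, joint_loss), the dimensions, the LSTM choices, and the weight lambda_rec are illustrative assumptions, and only a global-style reconstructor (matching the mean video feature) is sketched; the paper's second, local reconstructor variant is omitted.

```python
import torch
import torch.nn as nn

class RecNetSketch(nn.Module):
    """Hypothetical sketch of an encoder-decoder-reconstructor network."""
    def __init__(self, feat_dim=2048, hid_dim=512, vocab_size=10000):
        super().__init__()
        # Forward flow (video -> sentence): encode frame features, decode words.
        self.encoder = nn.LSTM(feat_dim, hid_dim, batch_first=True)
        self.embed = nn.Embedding(vocab_size, hid_dim)
        self.decoder = nn.LSTM(hid_dim, hid_dim, batch_first=True)
        self.classifier = nn.Linear(hid_dim, vocab_size)
        # Backward flow (sentence -> video): rebuild video features
        # from the decoder's hidden state sequence.
        self.reconstructor = nn.LSTM(hid_dim, hid_dim, batch_first=True)
        self.feat_proj = nn.Linear(hid_dim, feat_dim)

    def forward(self, frame_feats, captions):
        # frame_feats: (B, T, feat_dim); captions: (B, L) word indices
        _, (h, c) = self.encoder(frame_feats)
        dec_out, _ = self.decoder(self.embed(captions), (h, c))  # (B, L, H)
        logits = self.classifier(dec_out)                        # (B, L, V)
        rec_out, _ = self.reconstructor(dec_out)                 # (B, L, H)
        # Global-style reconstruction: predict the mean video feature.
        rec_feat = self.feat_proj(rec_out.mean(dim=1))           # (B, feat_dim)
        return logits, rec_feat

def joint_loss(logits, targets, rec_feat, frame_feats, lambda_rec=0.2):
    """Generation loss plus weighted reconstruction loss, trained end-to-end.
    lambda_rec is an assumed trade-off weight, not the paper's value."""
    gen = nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
    rec = nn.functional.mse_loss(rec_feat, frame_feats.mean(dim=1))
    return gen + lambda_rec * rec
```

Under these assumptions, both losses backpropagate through the shared decoder hidden states, so the reconstruction objective regularizes the caption generator rather than being trained separately.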
URL
https://arxiv.org/abs/1803.11438