Abstract
Recent advances in image captioning have led to increasing interest in video captioning. However, most work on video captioning focuses on generating captions from a single input of aggregated features, which hardly deviates from the image captioning process and does not fully exploit the dynamic content present in videos. We attempt to generate video captions that convey richer content by temporally segmenting the video with action localization, generating multiple captions from multiple frames, and connecting them with natural language processing techniques to produce a story-like caption. We show that our proposed method generates captions that are richer in content and can compete with state-of-the-art methods without explicitly using video-level features as input.
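The abstract describes a three-stage pipeline: temporal segmentation via action localization, per-segment captioning from sampled frames, and connection of the segment captions into a single story-like caption. Below is a minimal sketch of that flow under assumptions of our own; the segmentation heuristic, the `caption_segment` stub, and the connective-based joining are hypothetical stand-ins, not the authors' actual models.

```python
# Minimal sketch of the pipeline outlined in the abstract:
# 1) temporally segment the video with action localization,
# 2) caption a representative frame of each segment,
# 3) connect the per-segment captions into a story-like caption.
# All model calls are stubbed; real detectors/captioners would replace them.

from typing import List, Tuple

def localize_actions(num_frames: int) -> List[Tuple[int, int]]:
    """Hypothetical action localizer: returns (start, end) frame ranges."""
    # Stand-in heuristic: split the video into three equal temporal segments.
    third = max(num_frames // 3, 1)
    return [(0, third), (third, 2 * third), (2 * third, num_frames)]

def caption_segment(segment: Tuple[int, int]) -> str:
    """Hypothetical per-segment captioner (e.g., an image captioning model
    applied to a key frame of the segment)."""
    start, end = segment
    return f"an action takes place between frame {start} and frame {end}"

def connect_captions(captions: List[str]) -> str:
    """Join per-segment captions with simple temporal connectives,
    approximating the 'story-like caption' the abstract describes."""
    connectives = ["First,", "Then,", "Finally,"]
    parts = []
    for i, cap in enumerate(captions):
        conn = connectives[min(i, len(connectives) - 1)]
        parts.append(f"{conn} {cap}")
    return " ".join(parts)

if __name__ == "__main__":
    num_frames = 300  # assumed video length in frames
    segments = localize_actions(num_frames)
    captions = [caption_segment(seg) for seg in segments]
    print(connect_captions(captions))
```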
URL
https://arxiv.org/abs/1605.05440