Abstract
Video captioning has recently attracted increasing interest due to its potential for improving accessibility and information retrieval. While existing methods rely on various kinds of visual features and model structures, they do not fully exploit relevant semantic information. We present an extensible approach that jointly leverages several types of visual features and semantic attributes. Our novel architecture builds on LSTMs for sentence generation, with several attention layers and two multimodal layers. The attention mechanism learns to automatically select the most salient visual features or semantic attributes, and the multimodal layers yield overall representations for the inputs and outputs of the sentence generation component. Experimental results on the challenging MSVD and MSR-VTT datasets show that our framework outperforms state-of-the-art approaches, while ground-truth semantic attributes are able to further raise the output quality to a near-human level.
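To make the described architecture concrete, here is a minimal sketch (not the authors' code) of one decoding step: soft attention over a visual feature stream and a semantic attribute stream, conditioned on the LSTM state, followed by a multimodal fusion layer feeding an LSTM language model. All dimensions, layer names, and the single input-side fusion simplification are illustrative assumptions.

```python
# Hypothetical sketch of an attend-and-fuse captioning step, assuming
# PyTorch; layer names and dimensions are illustrative, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftAttention(nn.Module):
    """Computes a convex combination of feature vectors given the LSTM state."""
    def __init__(self, feat_dim, hid_dim, att_dim=256):
        super().__init__()
        self.w_feat = nn.Linear(feat_dim, att_dim)
        self.w_hid = nn.Linear(hid_dim, att_dim)
        self.v = nn.Linear(att_dim, 1)

    def forward(self, feats, h):
        # feats: (batch, n, feat_dim); h: (batch, hid_dim)
        scores = self.v(torch.tanh(self.w_feat(feats) + self.w_hid(h).unsqueeze(1)))
        alpha = F.softmax(scores, dim=1)           # attention weights (batch, n, 1)
        return (alpha * feats).sum(dim=1)          # attended context (batch, feat_dim)

class CaptionDecoderStep(nn.Module):
    """One step: attend over two modality streams, fuse, update the LSTM."""
    def __init__(self, vis_dim, sem_dim, emb_dim, hid_dim, vocab_size):
        super().__init__()
        self.att_vis = SoftAttention(vis_dim, hid_dim)
        self.att_sem = SoftAttention(sem_dim, hid_dim)
        # Multimodal layer: merges attended contexts with the word embedding.
        self.fuse = nn.Linear(vis_dim + sem_dim + emb_dim, hid_dim)
        self.lstm = nn.LSTMCell(hid_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, vis_feats, sem_feats, word_emb, state):
        h, c = state
        ctx_vis = self.att_vis(vis_feats, h)       # salient visual features
        ctx_sem = self.att_sem(sem_feats, h)       # salient semantic attributes
        x = torch.tanh(self.fuse(torch.cat([ctx_vis, ctx_sem, word_emb], dim=-1)))
        h, c = self.lstm(x, (h, c))
        return self.out(h), (h, c)                 # word logits, new LSTM state
```

The paper describes two multimodal layers (on the input and output sides of the generator); this sketch collapses them into a single input-side fusion for brevity.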
URL
https://arxiv.org/abs/1612.00234