Abstract
Video captioning, the task of describing the content of a video, has seen some promising improvements in recent years with sequence-to-sequence models, but accurately learning the temporal and logical dynamics involved in the task still remains a challenge, especially given the lack of sufficient annotated data. We improve video captioning by sharing knowledge with two related directed-generation tasks: a temporally-directed unsupervised video prediction task to learn richer context-aware video encoder representations, and a logically-directed language entailment generation task to learn better video-entailed caption decoder representations. For this, we present a many-to-many multi-task learning model that shares parameters across the encoders and decoders of the three tasks. We achieve significant improvements and the new state-of-the-art on several standard video captioning datasets using diverse automatic and human evaluations. We also show mutual multi-task improvements on the entailment generation task.
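Below is a minimal sketch of the many-to-many parameter-sharing idea described in the abstract: the video-captioning encoder is shared with the unsupervised video-prediction task, and the captioning decoder is shared with the entailment-generation task. This is an illustrative assumption written in PyTorch; the module names, layer sizes, and training schedule are not taken from the authors' released code.

```python
# Sketch (assumed PyTorch): many-to-many multi-task sharing across three tasks.
#  - video captioning shares its ENCODER with unsupervised video prediction,
#  - and shares its DECODER with entailment generation.
import torch
import torch.nn as nn

class SharedVideoEncoder(nn.Module):
    """LSTM over per-frame features; shared by captioning and video prediction."""
    def __init__(self, frame_dim=2048, hidden=512):
        super().__init__()
        self.rnn = nn.LSTM(frame_dim, hidden, batch_first=True)

    def forward(self, frames):            # frames: (B, T, frame_dim)
        outputs, state = self.rnn(frames)
        return outputs, state             # state initializes a task-specific decoder

class SharedTextDecoder(nn.Module):
    """LSTM language decoder; shared by captioning and entailment generation."""
    def __init__(self, vocab=10000, embed=512, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed)
        self.rnn = nn.LSTM(embed, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, tokens, state):     # tokens: (B, L), state from some encoder
        hidden, _ = self.rnn(self.embed(tokens), state)
        return self.out(hidden)           # (B, L, vocab) next-word logits

class FramePredictor(nn.Module):
    """Task-specific head for unsupervised video prediction (regress future frames)."""
    def __init__(self, hidden=512, frame_dim=2048):
        super().__init__()
        self.rnn = nn.LSTM(frame_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, frame_dim)

    def forward(self, prev_frames, state):
        hidden, _ = self.rnn(prev_frames, state)
        return self.out(hidden)

class PremiseEncoder(nn.Module):
    """Task-specific text encoder for entailment premises; decodes with SharedTextDecoder."""
    def __init__(self, vocab=10000, embed=512, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed)
        self.rnn = nn.LSTM(embed, hidden, batch_first=True)

    def forward(self, tokens):
        _, state = self.rnn(self.embed(tokens))
        return state

# One optimizer over all parameters; training would alternate mini-batches among
# the three tasks (the mixing ratio is a hyperparameter, not fixed here).
video_enc, text_dec = SharedVideoEncoder(), SharedTextDecoder()
frame_dec, premise_enc = FramePredictor(), PremiseEncoder()
params = (list(video_enc.parameters()) + list(text_dec.parameters())
          + list(frame_dec.parameters()) + list(premise_enc.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-4)
```

In this sharing scheme, gradients from video prediction update the same encoder weights used for captioning, and gradients from entailment generation update the same decoder weights, which is the mechanism the abstract credits for richer context-aware video representations and better video-entailed captions.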
URL
https://arxiv.org/abs/1704.07489