Abstract
We present a new large-scale multilingual video description dataset, VATEX, which contains over 41,250 videos and 825,000 captions in both English and Chinese. Among the captions, there are over 206,000 English-Chinese parallel translation pairs. Compared to the widely-used MSR-VTT dataset, VATEX is multilingual, larger, more linguistically complex, and more diverse in terms of both videos and natural language descriptions. We also introduce two tasks for video-and-language research based on VATEX: (1) Multilingual Video Captioning, aimed at describing a video in multiple languages with a compact unified captioning model, and (2) Video-guided Machine Translation, which translates a source-language description into the target language using the video as additional spatiotemporal context. Extensive experiments on the VATEX dataset show, first, that the unified multilingual model not only produces both English and Chinese descriptions for a video more efficiently, but also offers improved performance over monolingual models. Furthermore, we demonstrate that the spatiotemporal video context can be effectively utilized to align the source and target languages and thus assist machine translation. Finally, we discuss the potential of using VATEX for other video-and-language research.
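To make the idea of "video as additional spatiotemporal context" concrete, here is a minimal NumPy sketch of one plausible fusion mechanism: the decoder state attends separately over encoded source-sentence tokens and per-frame video features, and the two context vectors are concatenated before predicting the next target word. This is an illustrative assumption, not the architecture from the paper; all names, dimensions, and the dot-product attention are hypothetical simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, keys):
    """Dot-product attention: weight each key row by similarity to the query."""
    scores = keys @ query        # (num_items,)
    weights = softmax(scores)    # attention distribution, sums to 1
    return weights @ keys        # context vector, same dim as a key row

d = 8
decoder_state = rng.standard_normal(d)         # current decoder hidden state
text_feats = rng.standard_normal((5, d))       # encoded source-sentence tokens
video_feats = rng.standard_normal((10, d))     # per-frame (spatiotemporal) features

text_ctx = attend(decoder_state, text_feats)   # textual context
video_ctx = attend(decoder_state, video_feats) # extra visual context

# Fused context a decoder could condition on at each generation step;
# without video_ctx this reduces to ordinary attention-based NMT.
fused = np.concatenate([text_ctx, video_ctx])
print(fused.shape)  # (16,)
```

The sketch shows why the video signal is complementary rather than redundant: when the source sentence is ambiguous, the visual context vector can still carry disambiguating information into the decoder.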
URL
https://arxiv.org/abs/1904.03493