Abstract
Self-supervised learning has become increasingly important to leverage the abundance of unlabeled data available on platforms like YouTube. Whereas most existing approaches learn low-level representations, we propose a joint visual-linguistic model to learn high-level features without any explicit supervision. In particular, inspired by its recent success in language modeling, we build upon the BERT model to learn bidirectional joint distributions over sequences of visual and linguistic tokens, derived from vector quantization of video data and off-the-shelf speech recognition outputs, respectively. We use this model in a number of tasks, including action classification and video captioning. We show that it can be applied directly to open-vocabulary classification, and confirm that large amounts of training data and cross-modal information are critical to performance. Furthermore, we outperform the state-of-the-art on video captioning, and quantitative results verify that the model learns high-level semantic features.
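The sketch below illustrates the data-preparation idea the abstract describes: vector-quantizing video clip features into discrete visual tokens, concatenating them with ASR word-piece tokens into a single sequence, and applying BERT-style masking for a cloze objective. It is a minimal illustration, not the paper's implementation: the feature extractor, codebook size, special-token ids, vocabulary offsets (`TEXT_OFFSET`, `VISUAL_OFFSET`), and masking ratio are all hypothetical stand-ins.

```python
# Minimal sketch of VideoBERT-style joint token preparation, assuming:
#  - `clip_features` are precomputed video clip embeddings (from some visual
#    backbone, not shown here), and
#  - `asr_ids` are word-piece ids from an off-the-shelf speech recognizer.
# The paper's actual vocabularies, special tokens, and masking scheme may differ.
import numpy as np
from sklearn.cluster import KMeans

# --- 1. Visual tokens via vector quantization of clip features --------------
rng = np.random.default_rng(0)
clip_features = rng.normal(size=(500, 1024))      # toy stand-in for real features
NUM_VISUAL_TOKENS = 64                            # the paper uses a larger codebook
codebook = KMeans(n_clusters=NUM_VISUAL_TOKENS, n_init=10, random_state=0)
codebook.fit(clip_features)

def visual_tokens(features: np.ndarray) -> np.ndarray:
    """Map each clip embedding to the id of its nearest centroid."""
    return codebook.predict(features)

# --- 2. Joint visual-linguistic sequence -------------------------------------
# Hypothetical special-token ids and vocabulary offsets, for illustration only.
CLS, SEP, MASK = 0, 1, 2
TEXT_OFFSET = 3                                   # text ids start after specials
VISUAL_OFFSET = TEXT_OFFSET + 30_000              # shift visual ids past text vocab

def joint_sequence(text_ids: list[int], clip_feats: np.ndarray) -> np.ndarray:
    """Build one sequence: [CLS] text tokens [SEP] visual tokens [SEP]."""
    txt = np.asarray(text_ids) + TEXT_OFFSET
    vis = visual_tokens(clip_feats) + VISUAL_OFFSET
    return np.concatenate(([CLS], txt, [SEP], vis, [SEP]))

# --- 3. BERT-style masking for the cloze (masked token) objective ------------
def mask_tokens(seq: np.ndarray, mask_prob: float = 0.15):
    """Replace a random subset of non-special tokens with [MASK]; return the
    masked inputs and prediction targets (-1 means 'not predicted')."""
    inputs, targets = seq.copy(), np.full_like(seq, -1)
    maskable = seq >= TEXT_OFFSET
    chosen = maskable & (rng.random(seq.shape) < mask_prob)
    targets[chosen] = seq[chosen]
    inputs[chosen] = MASK
    return inputs, targets

asr_ids = [17, 254, 9, 1023, 88]                  # toy ASR word-piece ids
seq = joint_sequence(asr_ids, clip_features[:20])
masked_inputs, prediction_targets = mask_tokens(seq)
print(masked_inputs[:12], prediction_targets[:12])
```

A transformer encoder trained to predict the masked targets from the masked inputs would then learn the bidirectional joint distribution over visual and linguistic tokens that the abstract refers to; the model itself is omitted here for brevity.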
URL
https://arxiv.org/abs/1904.01766