Abstract
The recent advance in vision-language models is largely attributed to the abundance of image-text data. We aim to replicate this success for video-language models, but there simply is not enough human-curated video-text data available. We thus resort to fine-tuning a video-language model from a strong image-language baseline with synthesized instructional data. The resulting video-language model is then used to auto-label millions of videos, generating high-quality captions. We show that the adapted video-language model performs well on a wide range of video-language benchmarks; for instance, it surpasses the best prior result on open-ended NExT-QA by 2.8%. Moreover, our model generates detailed descriptions for previously unseen videos, which provide better textual supervision than existing methods. Experiments show that a video-language dual-encoder model contrastively trained on these auto-generated captions is 3.8% better than the strongest baseline that also leverages vision-language models. Our best model outperforms state-of-the-art methods on MSR-VTT zero-shot text-to-video retrieval by 6%.
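To make the contrastive dual-encoder training mentioned in the abstract concrete, the sketch below shows a standard CLIP-style symmetric InfoNCE objective over a batch of video-caption pairs. This is an illustrative assumption rather than the paper's implementation: the function name `clip_style_loss`, the 512-dimensional embeddings, and the random tensors standing in for encoder outputs are all made up for the example.

```python
# Minimal sketch (not the paper's code) of contrastively training a
# video-text dual encoder on auto-generated captions, CLIP-style.
# Names and dimensions here are illustrative assumptions.
import torch
import torch.nn.functional as F

def clip_style_loss(video_emb: torch.Tensor,
                    text_emb: torch.Tensor,
                    temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of video-caption pairs."""
    # L2-normalize both embeddings so dot products are cosine similarities.
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    # Pairwise similarity matrix: video i against every caption in the batch.
    logits = v @ t.t() / temperature
    # Matching video-caption pairs lie on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)
    # Average the video-to-text and text-to-video cross-entropy terms.
    loss_v2t = F.cross_entropy(logits, targets)
    loss_t2v = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_v2t + loss_t2v)

# Example usage with random features standing in for encoder outputs.
video_emb = torch.randn(8, 512)  # 8 videos, 512-d video-encoder features
text_emb = torch.randn(8, 512)   # 8 auto-generated captions, 512-d text features
loss = clip_style_loss(video_emb, text_emb)
```

The diagonal targets treat each auto-generated caption as the positive for its own video and every other caption in the batch as a negative, which is the usual way a dual encoder is trained for zero-shot text-to-video retrieval.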
URL
https://arxiv.org/abs/2401.06129