Abstract
Efficient video-language modeling must account for computational cost, since videos can contain a large, sometimes intractable, number of frames. Fully parametric approaches such as the attention mechanism may not be ideal, because their computational cost grows quadratically with video length. Instead, previous studies have relied on offline feature extraction or frame sampling to represent videos efficiently, focusing on cross-modal modeling of short video clips. In this paper, we propose a semi-parametric video-grounded text generation model, SeViT, offering a novel perspective on scalable video-language modeling for long untrimmed videos. Treating a video as an external data store, SeViT combines a non-parametric frame retriever, which selects a few query-relevant frames from the store for a given query, with a parametric generator that effectively aggregates the retrieved frames with the query via late fusion. Experimental results demonstrate that our method has a significant advantage on longer videos and causal video understanding. Moreover, our model achieves a new state of the art on four video-language datasets: iVQA (+4.8), Next-QA (+6.9), and ActivityNet-QA (+4.8) in accuracy, and MSRVTT-Caption (+3.6) in CIDEr.
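The retrieve-then-generate design described in the abstract can be sketched as follows. This is a minimal illustration, not SeViT's actual implementation: the cosine-similarity retriever, the 2-d toy embeddings, and the concatenate-then-mean-pool "late fusion" are all assumptions made for the example.

```python
import numpy as np

def retrieve_frames(frame_embs: np.ndarray, query_emb: np.ndarray, k: int = 3) -> np.ndarray:
    """Non-parametric retriever: score every frame against the query by
    cosine similarity and return the indices of the top-k frames,
    sorted back into temporal order."""
    frames = frame_embs / np.linalg.norm(frame_embs, axis=1, keepdims=True)
    query = query_emb / np.linalg.norm(query_emb)
    scores = frames @ query
    topk = np.argsort(scores)[::-1][:k]   # k highest-scoring frames
    return np.sort(topk)                  # restore temporal order

def late_fusion(frame_embs: np.ndarray, query_emb: np.ndarray, indices: np.ndarray) -> np.ndarray:
    """Toy 'late fusion': pair each retrieved frame with the query
    (here just by concatenation), then pool across frames. SeViT's
    generator aggregates per-frame encodings far more expressively."""
    per_frame = np.concatenate(
        [frame_embs[indices], np.tile(query_emb, (len(indices), 1))], axis=1)
    return per_frame.mean(axis=0)

# Toy video of 6 frames with 2-d embeddings; the query points along axis 0,
# so frames 0, 2, and 5 are the most query-relevant.
video = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1],
                  [0.1, 0.9], [0.5, 0.5], [1.0, 0.1]])
query = np.array([1.0, 0.0])

idx = retrieve_frames(video, query, k=3)
print(idx)            # → [0 2 5]
fused = late_fusion(video, query, idx)
print(fused.shape)    # → (4,)
```

Because only the k retrieved frames reach the generator, the expensive parametric stage stays constant-cost as the video grows — the key to the scalability claim above.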
URL
https://arxiv.org/abs/2301.11507