Paper Reading AI Learner

Semi-Parametric Video-Grounded Text Generation

2023-01-27 03:00:43
Sungdong Kim, Jin-Hwa Kim, Jiyoung Lee, Minjoon Seo

Abstract

Efficient video-language modeling should account for computational cost, since the number of video frames can be large, sometimes intractably so. Parametric approaches such as the attention mechanism may not be ideal, since their computational cost increases quadratically with video length. Instead, previous studies have relied on offline feature extraction or frame sampling to represent videos efficiently, focusing on cross-modal modeling over short video clips. In this paper, we propose SeViT, a semi-parametric video-grounded text generation model that offers a novel perspective on scalable video-language modeling for long, untrimmed videos. Treating a video as an external data store, SeViT combines a non-parametric frame retriever, which selects a few query-relevant frames from the data store for a given query, with a parametric generator that effectively aggregates the retrieved frames with the query via late fusion. Experimental results demonstrate that our method has a significant advantage on longer videos and on causal video understanding. Moreover, our model achieves a new state of the art on four video-language datasets: iVQA (+4.8), Next-QA (+6.9), and ActivityNet-QA (+4.8) in accuracy, and MSRVTT-Caption (+3.6) in CIDEr.
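The retrieve-then-generate pipeline described in the abstract can be illustrated with a minimal sketch. This is not SeViT's actual implementation: the paper's retriever and generator are neural models, whereas here retrieval is plain cosine similarity over precomputed frame embeddings and "late fusion" is a retrieval-score-weighted average of per-frame outputs. All function names and dimensions below are illustrative assumptions.

```python
import numpy as np

def retrieve_frames(query_emb, frame_embs, k=4):
    """Non-parametric retrieval: score every frame against the query by
    cosine similarity and keep the top-k. Cost is linear in video length,
    unlike full cross-frame attention, which grows quadratically."""
    q = query_emb / np.linalg.norm(query_emb)
    f = frame_embs / np.linalg.norm(frame_embs, axis=1, keepdims=True)
    scores = f @ q
    top_k = np.argsort(-scores)[:k]
    return top_k, scores[top_k]

def late_fusion(per_frame_logits, frame_scores):
    """Late fusion (one simple instantiation): each retrieved frame is
    processed independently, then the per-frame answer logits are combined
    by a softmax-weighted average over the retrieval scores."""
    weights = np.exp(frame_scores) / np.exp(frame_scores).sum()
    return (weights[:, None] * per_frame_logits).sum(axis=0)

# Toy example: a 100-frame "video" with 16-dim frame embeddings.
rng = np.random.default_rng(0)
frames = rng.normal(size=(100, 16))
query = frames[42] + 0.05 * rng.normal(size=16)  # query resembles frame 42
idx, scores = retrieve_frames(query, frames, k=4)
print(idx[0])  # the matching frame should rank first
```

The key scalability point from the abstract is visible in the structure: only the k retrieved frames reach the (expensive) generator, so the per-query cost of the parametric stage is independent of the total video length.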

URL

https://arxiv.org/abs/2301.11507

PDF

https://arxiv.org/pdf/2301.11507.pdf

