Paper Reading AI Learner

Text-to-Clip Video Retrieval with Early Fusion and Re-Captioning

2018-04-13 20:46:37
Huijuan Xu, Kun He, Leonid Sigal, Stan Sclaroff, Kate Saenko

Abstract

We propose a novel method capable of retrieving clips from untrimmed videos based on natural language queries. This cross-modal retrieval task plays a key role in visual-semantic understanding, and requires localizing clips in time and computing their similarity to the query sentence. Current methods generate sentence and video embeddings and then compare them using a late fusion approach, but this ignores the word order in queries and prevents more fine-grained comparisons. Motivated by the need for fine-grained multi-modal feature fusion, we propose a novel early fusion embedding approach that combines video and language information at the word level. Furthermore, we use the inverse task of dense video captioning as a side-task to improve the learned embedding. Our full model combines these components with an efficient proposal pipeline that performs accurate localization of potential video clips. We present a comprehensive experimental validation on two large-scale text-to-clip datasets (Charades-STA and DiDeMo) and attain state-of-the-art retrieval results with our model.
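The early-versus-late fusion distinction in the abstract can be illustrated with a small sketch. This is not the paper's actual architecture (which uses learned recurrent/attention components); dimensions, projection matrices, and the scoring vector below are illustrative assumptions, shown only to contrast pooling the query into a sentence vector *before* cross-modal comparison (late fusion) with fusing the clip feature into *each word* before pooling (early fusion).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): clip feature dim, word embedding dim, query length.
D, E, T = 16, 8, 5

clip_feat = rng.normal(size=D)        # pooled visual feature for one candidate clip
word_embs = rng.normal(size=(T, E))   # embeddings for the T query words

# --- Late fusion (what the paper argues against) ---
# The words are pooled into one sentence vector before any cross-modal
# interaction, so word order and per-word detail are lost by the time the
# similarity is computed.
W_s = rng.normal(size=(D, E)) * 0.1           # sentence -> visual space (illustrative)
sent_emb = W_s @ word_embs.mean(axis=0)
late_score = float(clip_feat @ sent_emb
                   / (np.linalg.norm(clip_feat) * np.linalg.norm(sent_emb)))

# --- Early fusion (the paper's idea, sketched) ---
# The clip feature is combined with EACH word first, so the final score can
# depend on fine-grained word-level interactions; pooling happens only at the end.
W_w = rng.normal(size=(D, E)) * 0.1           # word -> visual space (illustrative)
fused = np.tanh(clip_feat[None, :] * (word_embs @ W_w.T))   # (T, D) per-word fused features
w_score = rng.normal(size=D) * 0.1            # scoring vector (illustrative)
early_score = float((fused @ w_score).mean()) # pool word-level scores last

print(late_score, early_score)
```

The key structural difference is where the pooling over words happens: before the cross-modal interaction (late) or after it (early), which is what lets the early-fusion model exploit word order and fine-grained correspondences.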

URL

https://arxiv.org/abs/1804.05113

PDF

https://arxiv.org/pdf/1804.05113.pdf