Paper Reading AI Learner

TARA: Simple and Efficient Time Aware Retrieval Adaptation of MLLMs for Video Understanding

2025-12-15 16:38:59
Piyush Bagad, Andrew Zisserman

Abstract

Our objective is to build a general time-aware video-text embedding model for retrieval. To that end, we propose a simple and efficient recipe, dubbed TARA (Time Aware Retrieval Adaptation), to adapt Multimodal LLMs (MLLMs) into a time-aware video-text embedding model without using any video data at all. For evaluating time-awareness in retrieval, we propose a new benchmark with temporally opposite (chiral) actions as hard negatives and curated splits for chiral and non-chiral actions. We show that TARA outperforms all existing video-text models on this chiral benchmark while also achieving strong results on standard benchmarks. Furthermore, we discover additional benefits of TARA beyond time-awareness: (i) TARA embeddings are negation-aware, as shown on NegBench, a benchmark that evaluates negation in video retrieval; (ii) TARA achieves state-of-the-art performance on verb and adverb understanding in videos. Overall, TARA yields a strong, versatile, time-aware video-text embedding model with state-of-the-art zero-shot performance.
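The evaluation idea behind the chiral benchmark can be illustrated with a minimal sketch: given a video embedding and candidate caption embeddings, a time-aware model should rank the correct caption above its temporally opposite (chiral) hard negative under cosine similarity. The toy embeddings below are made up for illustration; a real setup would obtain them from the adapted MLLM encoders described in the paper.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between rows of a and rows of b."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

# Toy stand-in embeddings (hypothetical values, not from any real encoder).
video_emb = np.array([[0.9, 0.1, 0.0]])  # e.g. a video of "opening a door"
caption_embs = np.array([
    [1.0, 0.0, 0.0],  # correct caption: "opening a door"
    [0.0, 1.0, 0.0],  # chiral hard negative: "closing a door"
])

sims = cosine_sim(video_emb, caption_embs)[0]
# A time-aware embedding model should score the correct caption higher
# than its temporal opposite; a time-blind model may score them similarly.
print(sims.argmax())  # -> 0 (correct caption retrieved)
```

Retrieval metrics such as Recall@1 on the chiral splits then reduce to counting how often the correct caption outranks its chiral negative across the benchmark.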

URL

https://arxiv.org/abs/2512.13511

PDF

https://arxiv.org/pdf/2512.13511.pdf

