Abstract
Text-video retrieval aims to find the most relevant cross-modal samples for a given query. Recent methods focus on modeling whole spatial-temporal relations. However, since video clips contain more diverse content than their captions, a model aligning these asymmetric video-text pairs risks retrieving many false positives. In this paper, we propose Probabilistic Token Aggregation (\textit{ProTA}) to handle cross-modal interaction under content asymmetry. Specifically, we propose dual partial-related aggregation to disentangle and re-aggregate token representations in both low-dimension and high-dimension spaces. We propose token-based probabilistic alignment to generate token-level probabilistic representations and maintain feature diversity. In addition, an adaptive contrastive loss is proposed to learn a compact cross-modal distribution space. In extensive experiments, \textit{ProTA} achieves significant improvements on MSR-VTT (50.9%), LSMDC (25.8%), and DiDeMo (47.2%).
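The abstract names three components: token-level probabilistic representations, a token aggregation step, and a contrastive objective. The paper's exact formulations are not given here, so the sketch below is only illustrative of the general pipeline: it uses a Gaussian reparameterization for per-token probabilistic embeddings, a simple max-then-mean token aggregation (a common stand-in, not the paper's dual partial-related aggregation), and a plain symmetric InfoNCE loss (a stand-in for the adaptive contrastive loss). All function names and projection heads are hypothetical.

```python
import torch
import torch.nn.functional as F

def probabilistic_tokens(features, mu_head, logvar_head):
    # Map each token to a Gaussian (mean, log-variance); both heads are
    # hypothetical linear projections, not from the paper.
    mu = mu_head(features)
    logvar = logvar_head(features)
    # Reparameterization trick: draw one sample per token so the
    # representation stays stochastic but remains differentiable.
    eps = torch.randn_like(mu)
    return mu + eps * torch.exp(0.5 * logvar)

def token_similarity(video_tokens, text_tokens):
    # Max over video tokens per text token, then mean over text tokens:
    # a generic partial-matching aggregation, simpler than the paper's.
    v = F.normalize(video_tokens, dim=-1)      # (Bv, Nv, D)
    t = F.normalize(text_tokens, dim=-1)       # (Bt, Nt, D)
    sim = torch.einsum('bnd,cmd->bcnm', v, t)  # all pairwise token sims
    return sim.max(dim=2).values.mean(dim=2)   # (Bv, Bt) clip-level sims

def contrastive_loss(sim, temperature=0.05):
    # Symmetric InfoNCE over the similarity matrix; matched video-text
    # pairs sit on the diagonal. A fixed temperature is used here, unlike
    # the adaptive loss described in the abstract.
    labels = torch.arange(sim.size(0), device=sim.device)
    logits = sim / temperature
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))
```

In a training loop, `probabilistic_tokens` would be applied to both video and text encoder outputs before `token_similarity` and `contrastive_loss`, so the alignment is computed between sampled token embeddings rather than point estimates.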
URL
https://arxiv.org/abs/2404.12216