Abstract
Text-video retrieval is a challenging task that aims to identify relevant videos given textual queries. Compared to conventional textual retrieval, the main obstacle for text-video retrieval is the semantic gap between the textual nature of queries and the visual richness of video content. Previous works primarily focus on aligning the query and the video by finely aggregating word-frame matching signals. Inspired by the human cognitive process of modularly judging the relevance between text and video, we argue that this judgment requires high-order matching signals because video content is continuous and complex. In this paper, we propose chunk-level text-video matching, where query chunks are extracted to describe specific retrieval units and video chunks are segmented into distinct clips. We formulate chunk-level matching as n-ary correlation modeling between the words of the query and the frames of the video, and introduce a multi-modal hypergraph for this purpose: textual units and video frames are represented as nodes, and hyperedges depict their relationships. In this way, the query and the video can be aligned in a high-order semantic space. In addition, to enhance the model's generalization ability, the extracted features are fed into a variational inference component that produces variational representations under a Gaussian distribution. The combination of hypergraphs and variational inference allows our model to capture complex, n-ary interactions among textual and visual content. Experimental results demonstrate that our proposed method achieves state-of-the-art performance on the text-video retrieval task.
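To make the two components concrete, below is a minimal PyTorch sketch of (1) a multi-modal hypergraph over word and frame nodes whose hyperedges connect several nodes at once, giving the n-ary correlations the abstract refers to, and (2) a variational head that produces Gaussian representations via the reparameterization trick. The kNN-based hyperedge construction, all dimensions, and the module names are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def build_hypergraph(words, frames, k=3):
    """Stack word and frame features into one node set and build an
    incidence matrix H (nodes x hyperedges). Each node spawns one
    hyperedge linking it to its k nearest neighbours in the joint
    embedding space (an assumed heuristic), so a hyperedge can span
    several words and frames simultaneously."""
    nodes = torch.cat([words, frames], dim=0)              # (n, d)
    unit = F.normalize(nodes, dim=-1)
    sim = unit @ unit.T                                    # cosine similarity
    topk = sim.topk(k + 1, dim=-1).indices                 # (n, k+1), incl. self
    n = nodes.size(0)
    H = torch.zeros(n, n)
    H.scatter_(0, topk.T, 1.0)                             # H[i, e] = 1 if node i in edge e
    return nodes, H

class HypergraphConv(nn.Module):
    """One degree-normalized node -> hyperedge -> node message-passing
    step, in the spirit of standard hypergraph convolutions."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.theta = nn.Linear(d_in, d_out)

    def forward(self, X, H):
        Dv = H.sum(1).clamp(min=1)                         # node degrees
        De = H.sum(0).clamp(min=1)                         # hyperedge degrees
        X = self.theta(X)
        E = H.T @ (X / Dv.unsqueeze(-1))                   # aggregate nodes into edges
        X = H @ (E / De.unsqueeze(-1))                     # distribute edges back to nodes
        return F.relu(X)

class VariationalHead(nn.Module):
    """Map features to a Gaussian posterior and sample with the
    reparameterization trick; also returns the KL term against a
    standard normal prior for regularization."""
    def __init__(self, d):
        super().__init__()
        self.mu = nn.Linear(d, d)
        self.logvar = nn.Linear(d, d)

    def forward(self, x):
        mu, logvar = self.mu(x), self.logvar(x)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
        return z, kl

# Toy usage with random word/frame features of an assumed dimension 256.
words, frames = torch.randn(12, 256), torch.randn(20, 256)
X, H = build_hypergraph(words, frames)
z, kl = VariationalHead(256)(HypergraphConv(256, 256)(X, H))
```

In this reading, retrieval scores would be computed on the sampled representations z, with the KL term added to the training loss; how the paper actually aggregates chunk-level scores is not specified in the abstract.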
URL
https://arxiv.org/abs/2401.03177