Abstract
A thorough comprehension of textual data is fundamental to multi-modal video analysis tasks. However, recent works have shown that current models do not achieve a comprehensive understanding of textual data when trained for the target downstream tasks. Orthogonal to previous approaches to this limitation, we postulate that understanding the significance of sentence components with respect to the target task can enhance model performance. Hence, we utilize the knowledge of a pre-trained large language model (LLM) to generate text samples from the original ones, targeting specific sentence components. We propose a weakly supervised importance estimation module to compute the relative importance of the components and utilize it to improve different video-language tasks. Through rigorous quantitative analysis, our proposed method exhibits significant improvement across several video-language tasks. In particular, in video-text retrieval our approach achieves a relative improvement of 8.3\% in video-to-text and 1.4\% in text-to-video retrieval over the baselines, in terms of R@1. Additionally, in video moment retrieval, average mAP shows a relative improvement ranging from 2.0\% to 13.7\% across different baselines.
URL
https://arxiv.org/abs/2312.06699