Abstract
Video moment retrieval and highlight detection are two highly valuable tasks in video understanding, but only recently have they been studied jointly. Although existing studies have made impressive advances, they predominantly follow the data-driven bottom-up paradigm. Such a paradigm overlooks task-specific and inter-task effects, resulting in poor model performance. In this paper, we propose TaskWeave, a novel task-driven top-down framework for joint moment retrieval and highlight detection. The framework introduces a task-decoupled unit to capture task-specific and common representations. To investigate the interplay between the two tasks, we propose an inter-task feedback mechanism that transforms the results of one task into guiding masks to assist the other. Unlike existing methods, we present a task-dependent joint loss function to optimize the model. Comprehensive experiments and in-depth ablation studies on the QVHighlights, TVSum, and Charades-STA datasets corroborate the effectiveness and flexibility of the proposed framework. Code is available at this https URL.
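To make the inter-task feedback idea concrete, below is a minimal PyTorch sketch of one plausible direction of the mechanism: per-clip saliency scores from highlight detection are converted into a soft guiding mask that re-weights the clip features consumed by moment retrieval. All class and variable names here are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class InterTaskFeedback(nn.Module):
    """Hypothetical sketch: use highlight-detection saliency as a guiding
    mask over clip features before moment retrieval (names are assumed)."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)  # project the masked features back

    def forward(self, clip_feats: torch.Tensor, saliency: torch.Tensor) -> torch.Tensor:
        # clip_feats: (B, T, D) per-clip video features
        # saliency:   (B, T)   highlight scores from the other task's head
        mask = torch.sigmoid(saliency).unsqueeze(-1)   # (B, T, 1) soft guiding mask
        guided = clip_feats * mask                     # suppress low-saliency clips
        return clip_feats + self.proj(guided)          # residual fusion with originals

# Usage sketch:
# feats = torch.randn(2, 75, 256); scores = torch.randn(2, 75)
# out = InterTaskFeedback(256)(feats, scores)  # features guided by highlight cues
```

A symmetric path (moment-retrieval predictions masking features for highlight detection) would follow the same pattern; the residual connection keeps the original features available so the mask guides rather than gates the downstream task.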
URL
https://arxiv.org/abs/2404.09263