Abstract
Temporal sentence grounding aims to retrieve the video moment described by a natural language query. Many existing works directly fuse the given video and the temporally localized query for grounding, overlooking the inherent domain gap between the two modalities. In this paper, we exploit pseudo-query features, which carry extensive temporally global textual knowledge drawn from the same video-query pair, to bridge this domain gap and raise the similarity between multi-modal features. Specifically, we propose a Pseudo-query Intermediary Network (PIN) that aligns visual and comprehensive pseudo-query features in the feature space through contrastive learning. We then use learnable prompts to encapsulate pseudo-query knowledge and propagate it into the textual encoder and the multi-modal fusion module, further improving vision-language feature alignment for better temporal grounding. Extensive experiments on the Charades-STA and ActivityNet-Captions datasets demonstrate the effectiveness of our method.
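The abstract describes two mechanisms: contrastive alignment of visual and pseudo-query features, and learnable prompts injected into the textual side. Below is a minimal PyTorch sketch of both ideas under assumed shapes and names; PIN's actual architecture, dimensions, and loss details are not specified in the abstract, so everything here (the class, projection heads, InfoNCE formulation, and prompt handling) is a hypothetical illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PseudoQueryAlignment(nn.Module):
    """Hypothetical sketch: contrastive video/pseudo-query alignment
    plus learnable prompt tokens (names and shapes are assumptions)."""
    def __init__(self, vis_dim: int, txt_dim: int, embed_dim: int = 256,
                 num_prompts: int = 8, temperature: float = 0.07):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, embed_dim)  # project pooled video features
        self.txt_proj = nn.Linear(txt_dim, embed_dim)  # project pseudo-query features
        # Learnable prompt tokens meant to encapsulate pseudo-query knowledge;
        # they would be prepended to the text encoder's token embeddings.
        self.prompts = nn.Parameter(torch.randn(num_prompts, txt_dim) * 0.02)
        self.temperature = temperature

    def contrastive_loss(self, vis_feat: torch.Tensor, pq_feat: torch.Tensor):
        # vis_feat: (B, vis_dim) pooled video features
        # pq_feat:  (B, txt_dim) pooled pseudo-query features from the same pairs
        v = F.normalize(self.vis_proj(vis_feat), dim=-1)
        t = F.normalize(self.txt_proj(pq_feat), dim=-1)
        logits = v @ t.t() / self.temperature  # (B, B) similarity matrix
        targets = torch.arange(v.size(0), device=v.device)
        # Symmetric InfoNCE: matched video/pseudo-query pairs are positives,
        # all other pairs in the batch serve as negatives.
        return 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))

    def prepend_prompts(self, token_embeds: torch.Tensor):
        # token_embeds: (B, L, txt_dim) query token embeddings before encoding
        B = token_embeds.size(0)
        p = self.prompts.unsqueeze(0).expand(B, -1, -1)
        return torch.cat([p, token_embeds], dim=1)  # (B, num_prompts + L, txt_dim)
```

In this reading, the contrastive term pulls each video's representation toward the pseudo-query from its own pair, while the prompt tokens give the text encoder and fusion module a trainable channel through which that pseudo-query knowledge can flow at inference time.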
URL
https://arxiv.org/abs/2404.13611