Abstract
Temporal video grounding (TVG) is a critical task in video content understanding. Despite significant advancements, existing methods often struggle to capture the fine-grained relationships between multimodal inputs and incur high computational costs when processing long video sequences. To address these limitations, we introduce SpikeMba, a novel multi-modal spiking saliency Mamba for temporal video grounding. In our work, we integrate Spiking Neural Networks (SNNs) and state space models (SSMs) to effectively capture the fine-grained relationships among multimodal features. Specifically, we introduce relevant slots to enhance the model's memory capabilities, enabling a deeper contextual understanding of video sequences. The contextual moment reasoner leverages these slots to maintain a balance between preserving contextual information and exploring semantic relevance. Simultaneously, the spiking saliency detector capitalizes on the unique properties of SNNs to accurately locate salient proposals. Our experiments demonstrate the effectiveness of SpikeMba, which consistently outperforms state-of-the-art methods across mainstream benchmarks.
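The abstract's spiking saliency detector relies on the thresholding behavior of spiking neurons to flag salient moments. As a rough illustration of that idea only (the paper's actual detector operates on learned multimodal features), the sketch below is a minimal leaky integrate-and-fire (LIF) neuron over hypothetical per-frame saliency scores: evidence accumulates in a membrane potential, and a spike marks a candidate salient frame. All names and values here are illustrative assumptions, not the authors' implementation.

```python
def lif_spikes(saliency, threshold=1.0, decay=0.8):
    """Leaky integrate-and-fire sketch: accumulate per-frame saliency
    scores into a decaying membrane potential and emit a spike
    (candidate salient frame) when it crosses the threshold,
    then reset the potential."""
    v = 0.0
    spikes = []
    for s in saliency:
        v = decay * v + s          # leaky integration of evidence
        if v >= threshold:
            spikes.append(1)       # fire: frame flagged as salient
            v = 0.0                # hard reset after firing
        else:
            spikes.append(0)
    return spikes

# Hypothetical scores: sustained high relevance drives spikes.
scores = [0.2, 0.3, 0.9, 0.8, 0.1, 0.05, 0.7, 0.9]
print(lif_spikes(scores))  # → [0, 0, 1, 0, 0, 0, 1, 0]
```

The binary, event-driven output is what makes SNN-based detection attractive for long videos: downstream proposal reasoning only needs to attend to the sparse spiking positions rather than every frame.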
URL
https://arxiv.org/abs/2404.01174