SpikeMba: Multi-Modal Spiking Saliency Mamba for Temporal Video Grounding

2024-04-01 15:26:44
Wenrui Li, Xiaopeng Hong, Xiaopeng Fan

Abstract

Temporal video grounding (TVG) is a critical task in video content understanding. Despite significant advancements, existing methods are often limited in capturing the fine-grained relationships between multimodal inputs and incur high computational costs when processing long video sequences. To address these limitations, we introduce SpikeMba, a novel multi-modal spiking saliency Mamba for temporal video grounding. In our work, we integrate Spiking Neural Networks (SNNs) and state space models (SSMs) to capture the fine-grained relationships among multimodal features effectively. Specifically, we introduce relevant slots to enhance the model's memory capabilities, enabling a deeper contextual understanding of video sequences. The contextual moment reasoner leverages these slots to maintain a balance between preserving contextual information and exploring semantic relevance. Simultaneously, the spiking saliency detector capitalizes on the unique properties of SNNs to accurately locate salient proposals. Our experiments demonstrate the effectiveness of SpikeMba, which consistently outperforms state-of-the-art methods across mainstream benchmarks.
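
The abstract does not come with code, but the spiking saliency detector idea can be illustrated with a minimal PyTorch sketch: a leaky integrate-and-fire (LIF) neuron converts fused video-text features into a sparse binary spike train, which then gates per-clip saliency scores. All class names, tensor shapes, and the specific LIF dynamics below are assumptions for illustration, not the authors' implementation:

```python
import torch
import torch.nn as nn

class LIFNeuron(nn.Module):
    """Leaky integrate-and-fire neuron (hypothetical parameters).

    Integrates input along the temporal axis, emits a binary spike when
    the membrane potential crosses the threshold, then soft-resets by
    subtracting the threshold.
    """
    def __init__(self, tau: float = 2.0, threshold: float = 1.0):
        super().__init__()
        self.tau = tau
        self.threshold = threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim) -- step through the temporal axis.
        mem = torch.zeros_like(x[:, 0])
        spikes = []
        for t in range(x.size(1)):
            mem = mem + (x[:, t] - mem) / self.tau   # leaky integration
            spike = (mem >= self.threshold).float()  # fire on crossing
            mem = mem - spike * self.threshold       # soft reset
            spikes.append(spike)
        return torch.stack(spikes, dim=1)            # (batch, time, dim)

class SpikingSaliencyDetector(nn.Module):
    """Sketch of a saliency head gated by a spike train.

    Clips whose fused video-text features repeatedly drive the neuron
    above threshold receive nonzero gates and hence nonzero scores,
    marking them as candidate salient proposals.
    """
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.lif = LIFNeuron()
        self.score = nn.Linear(dim, 1)

    def forward(self, fused: torch.Tensor) -> torch.Tensor:
        spikes = self.lif(self.proj(fused))             # (B, T, D) binary gates
        return self.score(spikes * fused).squeeze(-1)   # (B, T) saliency scores

# Usage: score 64 clips of 256-d fused features for a batch of 2 videos.
feats = torch.randn(2, 64, 256)
detector = SpikingSaliencyDetector(256)
print(detector(feats).shape)  # torch.Size([2, 64])
```

In the full model the spike train would interact with Mamba-style state-space features and slot-based memory rather than a plain linear projection; this sketch only shows the thresholding-as-gating mechanism that makes SNN outputs naturally sparse.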

URL

https://arxiv.org/abs/2404.01174

PDF

https://arxiv.org/pdf/2404.01174.pdf

