VMambaCC: A Visual State Space Model for Crowd Counting

2024-05-07 03:30:57
Hao-Yuan Ma, Li Zhang, Shuai Shi

Abstract

As a deep learning model, Visual Mamba (VMamba) offers low computational complexity and a global receptive field, and it has been successfully applied to image classification and detection. To extend its applications, we apply VMamba to crowd counting and propose a novel VMambaCC (VMamba Crowd Counting) model. Naturally, VMambaCC inherits the merits of VMamba, namely global modeling of images and low computational cost. Additionally, we design a Multi-head High-level Feature (MHF) attention mechanism for VMambaCC. MHF is a new attention mechanism that leverages high-level semantic features to augment low-level semantic features, thereby enhancing spatial feature representations with greater precision. Building upon MHF, we further present a High-level Semantic Supervised Feature Pyramid Network (HS2PFN) that progressively integrates and enhances high-level semantic information with low-level semantic information. Extensive experimental results on five public datasets validate the efficacy of our approach. For example, our method achieves a mean absolute error of 51.87 and a mean squared error of 81.3 on the ShanghaiTech Part_A dataset. Our code is coming soon.
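
The authors' code is not yet released, so the sketch below is only one plausible reading of the MHF mechanism the abstract describes: a multi-head cross-attention in which coarse high-level semantics guide fine low-level spatial features. Every class name, tensor shape, and design choice here (queries from the low-level map, keys/values from the upsampled high-level map, a residual connection with layer normalization) is a hypothetical illustration, not the paper's implementation.

```python
# Hypothetical sketch of an MHF-style cross-attention block.
# NOT the authors' code; all shapes and choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MHFAttentionSketch(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        # low:  (B, C, H, W) fine spatial detail from an early stage
        # high: (B, C, h, w) coarse semantics from a deep stage
        b, c, hgt, wid = low.shape
        # Upsample the semantic map to the spatial resolution of `low`.
        high = F.interpolate(high, size=(hgt, wid), mode="bilinear",
                             align_corners=False)
        q = low.flatten(2).transpose(1, 2)    # (B, H*W, C): queries from low level
        kv = high.flatten(2).transpose(1, 2)  # (B, H*W, C): keys/values from high level
        out, _ = self.attn(q, kv, kv)         # semantics re-weight spatial features
        out = self.norm(out + q)              # residual + layer norm
        return out.transpose(1, 2).reshape(b, c, hgt, wid)
```

Read in the same spirit, HS2PFN "progressively integrates" high-level semantics into lower levels; a top-down pyramid pass that applies such a block at each level is one possible reading. Again, this is an assumption, not the released architecture:

```python
# Hypothetical HS2PFN-style top-down pass: the coarsest (most semantic)
# level progressively enhances each finer level via the block above.
class HS2PFNSketch(nn.Module):
    def __init__(self, channels: int, num_levels: int = 3):
        super().__init__()
        self.blocks = nn.ModuleList(
            MHFAttentionSketch(channels) for _ in range(num_levels - 1)
        )

    def forward(self, feats):
        # feats: list of (B, C, h_i, w_i) maps, ordered coarsest to finest
        fused = feats[0]
        for block, finer in zip(self.blocks, feats[1:]):
            fused = block(low=finer, high=fused)  # semantics flow downward
        return fused  # finest-resolution map, e.g. for density regression

# Quick shape check with dummy tensors:
if __name__ == "__main__":
    net = HS2PFNSketch(channels=64)
    feats = [torch.randn(1, 64, 8, 8), torch.randn(1, 64, 16, 16),
             torch.randn(1, 64, 32, 32)]
    print(net(feats).shape)  # torch.Size([1, 64, 32, 32])
```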

URL

https://arxiv.org/abs/2405.03978

PDF

https://arxiv.org/pdf/2405.03978.pdf

