Abstract
As a deep learning model, Visual Mamba (VMamba) offers low computational complexity and a global receptive field, and has been successfully applied to image classification and detection. To extend its applications, we apply VMamba to crowd counting and propose a novel VMambaCC (VMamba Crowd Counting) model. Naturally, VMambaCC inherits the merits of VMamba, namely global modeling of images and low computational cost. Additionally, we design a Multi-head High-level Feature (MHF) attention mechanism for VMambaCC. MHF is a new attention mechanism that leverages high-level semantic features to augment low-level semantic features, thereby enhancing spatial feature representations with greater precision. Building upon MHF, we further present a High-level Semantic Supervised Feature Pyramid Network (HS2PFN) that progressively integrates and enhances high-level semantic information with low-level semantic information. Extensive experiments on five public datasets validate the efficacy of our approach. For example, our method achieves a mean absolute error of 51.87 and a mean squared error of 81.3 on the ShangHaiTech_PartA dataset. Our code is coming soon.
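The abstract does not spell out how MHF combines the two feature levels. One plausible reading, sketched below purely as an illustration (the function name `mhf_attention`, the head count, the residual connection, and the choice of low-level tokens as queries attending to high-level tokens as keys/values are all assumptions, not details from the paper), is a multi-head cross-attention in which high-level semantic tokens augment low-level spatial tokens:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mhf_attention(low, high, num_heads=4):
    """Hypothetical sketch of a multi-head high-level-feature attention.

    low  : (n_low, d)  low-level spatial tokens (queries)
    high : (n_high, d) high-level semantic tokens (keys and values)
    Returns low-level tokens augmented with attended high-level context.
    (Projection matrices are omitted for brevity; a real model would
    learn separate Q/K/V projections per head.)
    """
    n_low, d = low.shape
    n_high, _ = high.shape
    assert d % num_heads == 0, "embedding dim must divide evenly into heads"
    dh = d // num_heads

    # Split channels into heads: (num_heads, tokens, dh).
    q = low.reshape(n_low, num_heads, dh).transpose(1, 0, 2)
    k = high.reshape(n_high, num_heads, dh).transpose(1, 0, 2)
    v = k  # values share the high-level features in this sketch

    # Scaled dot-product attention per head.
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(dh)   # (heads, n_low, n_high)
    attn = softmax(scores, axis=-1)
    out = attn @ v                                    # (heads, n_low, dh)

    # Merge heads and add a residual so high-level info *augments*
    # rather than replaces the low-level representation.
    out = out.transpose(1, 0, 2).reshape(n_low, d)
    return low + out
```

With 16 low-level tokens of dimension 8 attending to 4 high-level tokens, the output keeps the low-level shape `(16, 8)`, consistent with the idea of enriching spatial features in place.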
URL
https://arxiv.org/abs/2405.03978