Abstract
The Spatial-Spectral Mamba (SSM) architecture improves computational efficiency and captures long-range dependencies, addressing key limitations of Transformers. However, traditional Mamba models overlook the rich spectral information in hyperspectral images (HSIs) and struggle with their high dimensionality and sequential structure. To address these issues, we propose an SSM with multi-head self-attention and token enhancement (MHSSMamba). The model integrates spectral and spatial information by enhancing spectral tokens and using multi-head attention to capture complex relationships between spectral bands and spatial locations. It also handles long-range dependencies and the sequential nature of HSI data, preserving contextual information across spectral bands. MHSSMamba achieved classification accuracies of 97.62% on Pavia University, 96.92% on the University of Houston, 96.85% on Salinas, and 99.49% on the Wuhan-LongKou dataset.
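The paper itself does not include code here; the sketch below is a minimal, hypothetical illustration of the kind of block the abstract describes: a learned gate that enhances spectral tokens, a simple diagonal state-space scan standing in for a full Mamba layer, and standard multi-head self-attention mixing spectral and spatial tokens. All module names, shapes, and hyperparameters are assumptions, not the authors' implementation.

```python
# Minimal sketch (NOT the authors' code): an MHSSMamba-style block combining
# spectral token enhancement, a toy state-space scan, and multi-head attention.
import torch
import torch.nn as nn

class TokenEnhancement(nn.Module):
    """Re-weights spectral tokens with a learned sigmoid gate (hypothetical)."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, x):            # x: (batch, tokens, dim)
        return x * self.gate(x)      # emphasize informative spectral tokens

class SimpleSSM(nn.Module):
    """A toy diagonal state-space scan standing in for a full Mamba block."""
    def __init__(self, dim, state=16):
        super().__init__()
        self.A = nn.Parameter(-torch.rand(dim, state))        # negative -> decay
        self.B = nn.Parameter(torch.randn(dim, state) * 0.01)
        self.C = nn.Parameter(torch.randn(dim, state) * 0.01)

    def forward(self, x):            # x: (batch, tokens, dim)
        b, t, d = x.shape
        h = x.new_zeros(b, d, self.A.shape[1])
        decay = torch.exp(self.A)    # (dim, state), values in (0, 1)
        ys = []
        for i in range(t):           # sequential scan over spectral tokens
            h = h * decay + x[:, i].unsqueeze(-1) * self.B
            ys.append((h * self.C).sum(-1))
        return torch.stack(ys, dim=1)

class MHSSMambaBlock(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.enhance = TokenEnhancement(dim)
        self.ssm = SimpleSSM(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x):            # x: (batch, spectral_tokens, dim)
        x = self.enhance(x)
        x = x + self.ssm(self.norm1(x))                       # long-range scan
        n = self.norm2(x)
        a, _ = self.attn(n, n, n)                             # spectral-spatial mixing
        return x + a

tokens = torch.randn(2, 30, 64)      # e.g., 30 spectral-patch tokens per pixel
print(MHSSMambaBlock()(tokens).shape)  # torch.Size([2, 30, 64])
```

In practice the per-step Python loop in `SimpleSSM` would be replaced by a fused selective-scan kernel, which is where Mamba's efficiency advantage over attention comes from.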
URL
https://arxiv.org/abs/2408.01224