Abstract
Hyperspectral image (HSI) classification presents unique challenges due to high spectral dimensionality and limited labeled data. Traditional deep learning models often suffer from overfitting and high computational costs. Self-distillation (SD), a variant of knowledge distillation in which a network learns from its own predictions, has recently emerged as a promising strategy for improving model performance without an external teacher network. In this work, we explore the application of SD to HSI classification by treating the network's intermediate outputs as soft targets, thereby enforcing consistency between intermediate and final predictions. This process improves intra-class compactness and inter-class separability in the learned feature space. Our approach is validated on two benchmark HSI datasets and demonstrates significant improvements in classification accuracy and robustness, highlighting the effectiveness of SD for spectral-spatial learning. Code is available at this https URL.
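The abstract does not give implementation details, but a minimal sketch of the consistency objective it describes could look as follows, assuming a PyTorch model with auxiliary intermediate classification heads; the function name, arguments, and weighting scheme here are hypothetical illustrations, not the authors' actual implementation.

```python
import torch
import torch.nn.functional as F

def self_distillation_loss(final_logits, intermediate_logits_list, labels,
                           temperature=2.0, alpha=0.5):
    """Cross-entropy on the final head plus a KL consistency term that
    treats (detached) intermediate predictions as soft targets.
    Hypothetical sketch; hyperparameters are illustrative only."""
    # Standard supervised loss on the final prediction.
    ce = F.cross_entropy(final_logits, labels)

    # Consistency term: push the final prediction toward each earlier
    # (intermediate) prediction, used as a soft target with no gradient.
    kd = 0.0
    for inter_logits in intermediate_logits_list:
        soft_target = F.softmax(inter_logits.detach() / temperature, dim=1)
        log_pred = F.log_softmax(final_logits / temperature, dim=1)
        kd = kd + F.kl_div(log_pred, soft_target,
                           reduction="batchmean") * temperature ** 2
    kd = kd / max(len(intermediate_logits_list), 1)

    # Weighted combination of supervised and self-distillation terms.
    return (1.0 - alpha) * ce + alpha * kd
```

In use, `final_logits` would be the last classifier output for a batch of HSI patches and `intermediate_logits_list` the outputs of earlier heads; whether the paper distills in this direction, the reverse, or symmetrically is not specified in the abstract.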
URL
https://arxiv.org/abs/2601.07416