Abstract
Given a model well trained on a large-scale base dataset, Few-Shot Class-Incremental Learning (FSCIL) aims to incrementally learn novel classes from a few labeled samples while avoiding overfitting, without catastrophically forgetting any previously encountered classes. Semi-supervised learning, which harnesses freely available unlabeled data to compensate for limited labeled data, boosts performance in numerous vision tasks and can heuristically be applied to FSCIL, yielding Semi-supervised FSCIL (Semi-FSCIL). So far, very little work has focused on the Semi-FSCIL task, leaving unresolved the question of how well semi-supervised learning adapts to FSCIL. In this paper, we focus on this adaptability issue and present a simple yet efficient Semi-FSCIL framework named Uncertainty-aware Distillation with Class-Equilibrium (UaD-CE), comprising two modules, UaD and CE. Specifically, when incorporating unlabeled data into each incremental session, we introduce the CE module, which employs class-balanced self-training to prevent easy-to-classify classes from gradually dominating pseudo-label generation. To distill reliable knowledge from the reference model, we further implement the UaD module, which combines uncertainty-guided knowledge refinement with adaptive distillation. Comprehensive experiments on three benchmark datasets demonstrate that our method improves the adaptability of unlabeled data under the semi-supervised learning technique in FSCIL tasks.
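The two mechanisms named in the abstract can be illustrated with a minimal sketch. This is a hypothetical NumPy rendering of the general ideas, not the authors' implementation: class-balanced pseudo-label selection keeps an equal quota of high-confidence unlabeled samples per class so that easy classes cannot dominate self-training, and uncertainty-weighted distillation down-weights the distillation loss on samples where the frozen reference model is uncertain (measured here by predictive entropy, an assumed proxy).

```python
import numpy as np

def class_balanced_pseudo_labels(probs, per_class_quota):
    """Select pseudo-labeled samples with an equal quota per class.

    probs: (N, C) softmax outputs of the current model on unlabeled data.
    Returns (indices, labels) of the selected samples.
    """
    preds = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    chosen_idx, chosen_lab = [], []
    for c in range(probs.shape[1]):
        cand = np.where(preds == c)[0]
        # keep at most `per_class_quota` most-confident samples of class c
        keep = cand[np.argsort(conf[cand])[::-1][:per_class_quota]]
        chosen_idx.extend(keep.tolist())
        chosen_lab.extend([c] * len(keep))
    return np.array(chosen_idx, dtype=int), np.array(chosen_lab, dtype=int)

def uncertainty_weighted_distillation(student_logp, teacher_probs):
    """Per-sample KL(teacher || student), weighted by teacher certainty.

    student_logp: (N, C) log-probabilities of the current model.
    teacher_probs: (N, C) probabilities of the frozen reference model.
    A confident (low-entropy) teacher gets weight near 1; a maximally
    uncertain teacher gets weight 0, so its knowledge is not distilled.
    """
    eps = 1e-12
    entropy = -(teacher_probs * np.log(teacher_probs + eps)).sum(axis=1)
    max_ent = np.log(teacher_probs.shape[1])
    weight = 1.0 - entropy / max_ent
    kl = (teacher_probs * (np.log(teacher_probs + eps) - student_logp)).sum(axis=1)
    return float((weight * kl).mean())
```

Selecting a fixed quota per predicted class, rather than thresholding global confidence, is one common way to keep the pseudo-label pool class-balanced; the actual UaD-CE refinement and adaptive distillation schedule are described in the paper itself.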
URL
https://arxiv.org/abs/2301.09964