Abstract
Continual-learning-based image segmentation methods exhibit a critical performance drop, mainly due to catastrophic forgetting and background shift, as they must incorporate new classes continually. In this paper, we propose a simple yet effective Continual Image Segmentation method with incremental Dynamic Query (CISDQ), which decouples the representation learning of old and new knowledge with lightweight query embeddings. CISDQ makes three main contributions: 1) We define dynamic queries with an adaptive background class to exploit past knowledge and learn future classes naturally. 2) CISDQ proposes a class/instance-aware Query Guided Knowledge Distillation strategy to overcome catastrophic forgetting by capturing inter-class diversity and intra-class identity. 3) Beyond semantic segmentation, CISDQ introduces continual learning for instance segmentation, in which instance-wise labeling and supervision are considered. Extensive experiments on three datasets for two tasks (i.e., continual semantic and instance segmentation) demonstrate that CISDQ achieves state-of-the-art performance, specifically obtaining 4.4% and 2.9% mIoU improvements for the ADE 100-10 (6 steps) and ADE 100-5 (11 steps) settings, respectively.
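The abstract's Query Guided Knowledge Distillation can be illustrated with a minimal sketch (not the authors' code; the function names and the plain KL-based formulation are illustrative assumptions): the old model acts as a frozen teacher for the queries covering previously learned classes, while newly added queries are left free to fit new classes.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax over the class dimension."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def query_kd_loss(teacher_old_logits, student_old_logits, temperature=2.0):
    """Hypothetical distillation term: KL divergence between the frozen
    teacher's and the student's class predictions, computed only for the
    old queries (new queries are excluded so they can learn new classes).

    Shapes: (num_old_queries, num_old_classes)."""
    p = softmax(teacher_old_logits / temperature)
    q = softmax(student_old_logits / temperature)
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1)))
```

The loss is zero when the student reproduces the teacher exactly and grows as the old-query predictions drift, which is the mechanism by which distillation counteracts catastrophic forgetting.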
URL
https://arxiv.org/abs/2311.17450