Abstract
Continual semantic segmentation aims to learn new classes while retaining knowledge of previously learned ones. Although prior studies have made impressive progress in recent years, fairness in continual semantic segmentation remains under-addressed. Meanwhile, fairness is one of the most vital factors in deploying deep learning models, especially in human-related or safety-critical applications. In this paper, we present a novel Fairness Continual Learning approach to the semantic segmentation problem. In particular, under a fairness objective, a new fairness continual learning framework is proposed based on class distributions. Then, a novel Prototypical Contrastive Clustering loss is proposed to address the two major challenges in continual learning, i.e., catastrophic forgetting and background shift. The proposed loss is also shown to generalize the knowledge-distillation paradigm commonly used in continual learning. Moreover, the proposed Conditional Structural Consistency loss further regularizes the structural constraints of the predicted segmentation. Our approach achieves State-of-the-Art performance on three standard scene understanding benchmarks, i.e., ADE20K, Cityscapes, and Pascal VOC, and promotes the fairness of the segmentation model.
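The abstract describes a Prototypical Contrastive Clustering loss that pulls pixel embeddings toward the prototype of their class while pushing them away from other classes' prototypes. The paper's exact formulation is not given here, so the following is only a minimal sketch of a generic prototype-based contrastive loss (cross-entropy over temperature-scaled cosine similarities between features and prototypes); the function name, the `temperature` parameter, and the tensor shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def prototypical_contrastive_loss(features, labels, prototypes, temperature=0.1):
    """Sketch of a prototype-based contrastive loss (hypothetical, not the
    paper's exact formulation).

    features:   (N, D) pixel embeddings
    labels:     (N,)   class index per pixel
    prototypes: (C, D) one learned prototype vector per class
    """
    # L2-normalize so the dot product below is a cosine similarity.
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    protos = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)

    # (N, C) similarities; temperature sharpens the softmax distribution.
    logits = feats @ protos.T / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability

    # Cross-entropy against the pixel's own class prototype: this pulls each
    # embedding toward its prototype and pushes it from the other prototypes.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```

Embeddings that already sit near their class prototype yield a low loss, while mislabeled or drifting embeddings yield a high one, which is the clustering pressure such a loss applies during continual training.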
URL
https://arxiv.org/abs/2305.15700