Abstract
One objective of continual learning is to prevent catastrophic forgetting when learning multiple tasks sequentially, and existing solutions have been driven by the conceptualization of the plasticity-stability dilemma. However, the convergence of continual learning on each sequential task has received little study so far. In this paper, we provide a convergence analysis of memory-based continual learning with stochastic gradient descent, together with empirical evidence that training on the current task causes cumulative degradation of performance on previous tasks. We propose an adaptive method for nonconvex continual learning (NCCL), which adjusts the step sizes for both the previous and current tasks using their gradients. The proposed method achieves the same convergence rate as SGD when the catastrophic forgetting term, which we define in the paper, is suppressed at each iteration. Furthermore, we demonstrate that the proposed algorithm improves over existing continual learning methods on several image classification tasks.
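To make the idea concrete, below is a minimal sketch, in PyTorch, of what an adaptive two-step-size update of this kind could look like. Everything specific here is an assumption for illustration: the function name `nccl_style_step`, the use of the gradient inner product as a proxy for the catastrophic forgetting term, and the sigmoid-based scaling rule are hypothetical and do not reproduce the paper's exact NCCL update.

```python
import torch

def nccl_style_step(model, loss_curr, loss_mem, base_lr=0.1):
    """Hypothetical sketch of an adaptive two-step-size SGD update.

    Scales the step sizes of the current-task and memory (previous-task)
    gradients based on their alignment, a rough proxy for the
    catastrophic forgetting term defined in the paper.
    """
    params = [p for p in model.parameters() if p.requires_grad]

    # Gradient on the current task's mini-batch.
    g_curr = torch.autograd.grad(loss_curr, params, retain_graph=True)
    # Gradient on a mini-batch replayed from episodic memory.
    g_mem = torch.autograd.grad(loss_mem, params)

    # Flatten to measure alignment between the two directions.
    gc = torch.cat([g.reshape(-1) for g in g_curr])
    gm = torch.cat([g.reshape(-1) for g in g_mem])
    dot = torch.dot(gc, gm)

    # Hypothetical adaptation rule: if the current-task gradient would
    # increase the memory loss (negative alignment), damp its step size.
    if dot < 0:
        lr_curr = base_lr * float(torch.sigmoid(dot / (gm.norm() ** 2 + 1e-12)))
        lr_mem = base_lr
    else:
        lr_curr, lr_mem = base_lr, base_lr

    # Joint SGD step on both tasks with the adapted step sizes.
    with torch.no_grad():
        for p, gcur, gmem in zip(params, g_curr, g_mem):
            p -= lr_curr * gcur + lr_mem * gmem

    return dot.item()  # proxy for the forgetting term at this step
```

In a full training loop, this step would be called with the current-task mini-batch loss and a loss computed on samples drawn from an episodic memory buffer; the returned inner product can be monitored as a rough per-iteration forgetting diagnostic.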
URL
https://arxiv.org/abs/2404.05555