Abstract
Continual Learning (CL) aims to incrementally learn new tasks without forgetting the knowledge acquired from old ones. Experience Replay (ER) is a simple and effective rehearsal-based strategy that optimizes the model jointly on the current training data and a subset of old samples stored in a memory buffer. To further reduce forgetting, recent approaches extend ER with techniques such as model regularization and memory sampling. However, the prediction consistency between the new model and the old one on the current training data has seldom been explored, so little knowledge is preserved when few previous samples are available. To address this issue, we propose a CL method with Strong Experience Replay (SER), which, besides distilling past experiences from the memory buffer, additionally exploits future experiences mimicked on the current training data. In our method, the updated model produces outputs that approximate those of its original counterpart, which effectively preserves the acquired knowledge. Experimental results on multiple image classification datasets show that our SER method surpasses state-of-the-art methods by a noticeable margin.
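To make the replay-and-distillation recipe concrete, below is a minimal PyTorch-style sketch of one SER-like training step, reconstructed only from the abstract above. The names old_model, buffer.sample(), and the weights alpha/beta are illustrative assumptions, not the paper's actual interface or exact loss formulation.

    import torch
    import torch.nn.functional as F

    def ser_training_step(model, old_model, buffer, batch, optimizer,
                          alpha=0.5, beta=0.5):
        """One rehearsal step in the spirit of SER (illustrative sketch).

        old_model is a frozen copy of the model from before the current
        task; buffer yields stored (input, label) pairs; alpha and beta
        are hypothetical weights for the two distillation terms.
        """
        x, y = batch
        optimizer.zero_grad()

        # Standard cross-entropy on the current task's data.
        logits = model(x)
        loss = F.cross_entropy(logits, y)

        # Past experience: replay old samples from the memory buffer and
        # distill the old model's outputs on them.
        buf_x, buf_y = buffer.sample()
        buf_logits = model(buf_x)
        loss = loss + F.cross_entropy(buf_logits, buf_y)
        with torch.no_grad():
            old_buf_logits = old_model(buf_x)
        loss = loss + alpha * F.mse_loss(buf_logits, old_buf_logits)

        # "Future experience" mimicked on current data: keep the updated
        # model's predictions on the *current* batch consistent with the
        # old model's outputs, preserving acquired knowledge even when
        # few previous samples are available.
        with torch.no_grad():
            old_logits = old_model(x)
        loss = loss + beta * F.mse_loss(logits, old_logits)

        loss.backward()
        optimizer.step()
        return loss.item()

The key departure from plain ER is the last term: the consistency loss is computed on the current training data rather than only on buffered samples, which is what the abstract refers to as mimicking future experiences.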
URL
https://arxiv.org/abs/2305.13622