Abstract
Deep representation learning methods struggle with continual learning, suffering from both catastrophic forgetting of useful units and loss of plasticity, often due to rigid and unuseful units. While many methods address these two issues separately, only a few currently deal with both simultaneously. In this paper, we introduce Utility-based Perturbed Gradient Descent (UPGD) as a novel approach for the continual learning of representations. UPGD combines gradient updates with perturbations, where it applies smaller modifications to more useful units, protecting them from forgetting, and larger modifications to less useful units, rejuvenating their plasticity. We use a challenging streaming learning setup where continual learning problems have hundreds of non-stationarities and unknown task boundaries. We show that many existing methods suffer from at least one of the issues, predominantly manifested by their decreasing accuracy over tasks. On the other hand, UPGD continues to improve performance and surpasses or is competitive with all methods in all problems. Finally, in extended reinforcement learning experiments with PPO, we show that while Adam exhibits a performance drop after initial learning, UPGD avoids it by addressing both continual learning issues.
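The update rule described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a per-weight utility signal already scaled to [0, 1] (the paper estimates utility online, which is omitted here), and gates both the gradient and the perturbation by `(1 - utility)` so that useful weights change little while unuseful ones are modified and rejuvenated more.

```python
import numpy as np

rng = np.random.default_rng(0)

def upgd_step(w, grad, utility, lr=0.01, noise_std=0.01):
    """One UPGD-style update (sketch).

    High-utility weights (utility near 1) are shielded from both the
    gradient step and the noise, mitigating forgetting; low-utility
    weights (utility near 0) receive the full perturbed update,
    restoring plasticity. `utility` is assumed precomputed in [0, 1].
    """
    noise = rng.normal(0.0, noise_std, size=w.shape)
    return w - lr * (grad + noise) * (1.0 - utility)
```

For example, a weight with utility 1.0 is left untouched, while a weight with utility 0.0 receives a full gradient-plus-noise step; intermediate utilities interpolate between the two.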
URL
https://arxiv.org/abs/2404.00781