Abstract
Class-Incremental Learning updates a deep classifier with new categories while maintaining accuracy on previously observed classes. Regularizing the neural network weights is a common way to prevent forgetting previously learned classes while learning novel ones. However, existing regularizers use a constant magnitude throughout all learning sessions, which may not reflect the varying difficulty of the tasks encountered during incremental learning. This study investigates the necessity of adaptive regularization in Class-Incremental Learning, which dynamically adjusts the regularization strength according to the complexity of the task at hand. We propose a Bayesian-Optimization-based approach to automatically determine the optimal regularization magnitude for each learning task. Our experiments on two datasets with two regularizers demonstrate the importance of adaptive regularization for accurate and less forgetful visual incremental learning.
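To make the per-task idea concrete, here is a toy sketch of choosing a separate regularization strength for each incremental task. The paper uses Bayesian Optimization for this step; below, a coarse grid search over log-spaced candidates stands in for the BO inner loop, and the "validation accuracy" surface is entirely synthetic (a hypothetical function, not the paper's setup), purely to illustrate that the best strength can differ from task to task.

```python
import math

def val_accuracy(task_id, lam):
    """Hypothetical validation accuracy: each task prefers a different
    regularization strength (here, harder tasks -> larger optimal lambda)."""
    opt = 10.0 ** (task_id - 2)  # task 0 -> 1e-2, task 1 -> 1e-1, task 2 -> 1e0
    return math.exp(-(math.log10(lam) - math.log10(opt)) ** 2)

def select_lambda(task_id, candidates):
    """Stand-in for the Bayesian Optimization step: evaluate each candidate
    lambda on held-out data and keep the one with the best score."""
    return max(candidates, key=lambda lam: val_accuracy(task_id, lam))

# Log-spaced candidate strengths, 1e-4 ... 1e2.
candidates = [10.0 ** e for e in range(-4, 3)]

# A fixed-magnitude regularizer would reuse one lambda for every session;
# the adaptive scheme re-selects it at the start of each task.
per_task_lambda = [select_lambda(t, candidates) for t in range(3)]
print(per_task_lambda)  # each task ends up with its own strength
```

In an actual Class-Incremental Learning run, the selected strength would then scale the regularization penalty (e.g., a weight-drift term) while training on that task's data; the point of the sketch is only the outer per-task selection loop.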
URL
https://arxiv.org/abs/2303.13113