Abstract
A growing body of evidence suggests that neural networks employed in deep reinforcement learning (RL) gradually lose their plasticity, the ability to learn from new data; however, the analysis and mitigation of this phenomenon are hampered by the complex relationship between plasticity, exploration, and performance in RL. This paper introduces plasticity injection, a minimalistic intervention that increases network plasticity without changing the number of trainable parameters or biasing the predictions. Its applications are twofold. First, it serves as a diagnostic tool: if injection increases performance, we may conclude that the agent's network was losing its plasticity. This tool allows us to identify a subset of Atari environments where the lack of plasticity causes performance plateaus, motivating future studies on understanding and combating plasticity loss. Second, plasticity injection can improve the computational efficiency of RL training when the agent would otherwise have to re-learn from scratch due to exhausted plasticity, or by growing the agent's network dynamically without compromising performance. The results on Atari show that plasticity injection attains stronger performance than alternative methods while being computationally efficient.
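The abstract's two constraints on the intervention, that injection must not change the number of trainable parameters and must not bias predictions at the moment it is applied, can be illustrated with a minimal sketch. The construction below is an assumption based on that description (a frozen original head plus a fresh trainable copy and an identical frozen copy that cancel each other at initialization); the class names and the single-scalar linear "head" are hypothetical simplifications, not the paper's implementation.

```python
import random

class LinearHead:
    """A toy network 'head': y = w * x + b on scalars."""
    def __init__(self, w, b):
        self.w, self.b = w, b

    def __call__(self, x):
        return self.w * x + self.b

class InjectedHead:
    """Hypothetical sketch of plasticity injection.

    The original head is frozen, and two freshly initialized copies are
    added: one trainable, one frozen at the same initial values. Output:
        f(x) = frozen(x) + trainable(x) - frozen_copy(x)
    At injection time trainable and frozen_copy are identical, so their
    contributions cancel and predictions are unbiased; afterwards only
    the trainable copy receives updates, so the number of trainable
    parameters is unchanged.
    """
    def __init__(self, original):
        self.frozen = original                   # frozen original head
        w, b = random.random(), random.random()  # fresh random init
        self.trainable = LinearHead(w, b)        # the only trainable part
        self.frozen_copy = LinearHead(w, b)      # identical frozen copy

    def __call__(self, x):
        return self.frozen(x) + self.trainable(x) - self.frozen_copy(x)

head = LinearHead(w=2.0, b=1.0)
injected = InjectedHead(head)
# Predictions coincide at the moment of injection (up to float rounding).
assert abs(head(3.0) - injected(3.0)) < 1e-9
```

Once `injected.trainable` is updated by gradient descent, its outputs drift away from `frozen_copy`, giving the agent fresh, plastic parameters while the frozen terms preserve what was already learned.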
URL
https://arxiv.org/abs/2305.15555