Abstract
We study a class of reinforcement learning problems where the reward signals for policy learning are generated by a discriminator that depends on and is jointly optimized with the policy. This interdependence between the policy and the discriminator leads to an unstable learning process: reward signals from an immature discriminator are noisy and impede policy learning, while conversely an untrained policy impedes discriminator learning. We call this learning setting $\textit{Internally Rewarded Reinforcement Learning}$ (IRRL), as the reward is not provided directly by the environment but $\textit{internally}$ by the discriminator. In this paper, we formally formulate IRRL and present a class of problems that belong to it. We theoretically derive and empirically analyze the effect of the reward function in IRRL and, based on these analyses, propose the clipped linear reward function. Experimental results show that the proposed reward function consistently stabilizes training by reducing the impact of reward noise, leading to faster convergence and higher performance than baselines across diverse tasks.
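To make the idea concrete, below is a minimal sketch of one plausible instantiation of a clipped linear reward. The function name `clipped_linear_reward`, the shift by the chance level `1 / n_classes`, and the example numbers are illustrative assumptions, not the paper's exact formulation; see the paper for the actual derivation.

```python
import numpy as np

def clipped_linear_reward(p_correct: np.ndarray, n_classes: int) -> np.ndarray:
    # Hypothetical form: shift the discriminator's probability for the
    # ground-truth label by the chance level (1 / n_classes), then clip
    # negatives to zero. An immature discriminator whose outputs hover
    # around chance then yields near-zero reward, damping the noise it
    # would otherwise inject into policy learning.
    return np.maximum(0.0, p_correct - 1.0 / n_classes)

# Example: with 10 classes, chance level is 0.1. Probabilities at or below
# chance give zero reward; confident correct predictions give positive reward.
probs = np.array([0.05, 0.10, 0.50, 0.95])
print(clipped_linear_reward(probs, n_classes=10))  # [0.   0.   0.4  0.85]
```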
URL
https://arxiv.org/abs/2302.00270