Abstract
In deep reinforcement learning (RL) research, there has been a concerted effort to design more efficient and productive exploration methods for solving sparse-reward problems. These exploration methods often share common principles (e.g., improving diversity) and implementation details (e.g., intrinsic reward). Prior work found that non-stationary Markov decision processes (MDPs) require exploration to efficiently adapt to changes in the environment with online transfer learning. However, the relationship between specific exploration characteristics and effective transfer learning in deep RL has not been characterized. In this work, we seek to understand the relationships between salient exploration characteristics and improved performance and efficiency in transfer learning. We test eleven popular exploration algorithms on a variety of transfer types -- or "novelties" -- to identify the characteristics that positively affect online transfer learning. Our analysis shows that some characteristics correlate with improved performance and efficiency across a wide range of transfer tasks, while others only improve transfer performance with respect to specific environment changes. From our analysis, we make recommendations about which exploration algorithm characteristics are best suited to specific transfer situations.
URL
https://arxiv.org/abs/2404.02235