Abstract
Exploration efficiency poses a significant challenge in goal-conditioned reinforcement learning (GCRL) tasks, particularly those with long horizons and sparse rewards. A primary limitation on exploration efficiency is the agent's inability to leverage environmental structural patterns. In this study, we introduce GEASD, a novel framework designed to capture these patterns through an adaptive skill distribution learned during training. This distribution optimizes the local entropy of achieved goals within a contextual horizon, enhancing goal-spreading behavior and facilitating deep exploration in states that contain familiar structural patterns. Our experiments show marked improvements in exploration efficiency under the adaptive skill distribution compared to a uniform one. Moreover, the learned skill distribution generalizes robustly, achieving substantial exploration progress on unseen tasks with similar local structures.
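To make the core idea concrete, here is a minimal sketch of an entropy-driven skill distribution. It is not the paper's actual GEASD algorithm; the function names, the histogram-based entropy estimate, and the softmax weighting are illustrative assumptions. The idea it demonstrates is: estimate how widely each skill spreads achieved goals (via the entropy of their discretized distribution), then sample skills in proportion to that spread.

```python
import math
from collections import Counter

def local_entropy(goals, bin_size=1.0):
    """Shannon entropy (nats) of achieved goals, discretized into bins.
    A crude stand-in for the paper's local-entropy objective: goals are
    hashed to grid cells and the entropy of the cell histogram is returned."""
    counts = Counter(tuple(int(c // bin_size) for c in g) for g in goals)
    n = sum(counts.values())
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def adaptive_skill_distribution(entropy_per_skill, temperature=1.0):
    """Softmax over per-skill entropy estimates: skills whose achieved
    goals are more spread out receive higher sampling probability."""
    logits = [h / temperature for h in entropy_per_skill]
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

# Hypothetical usage: skill 0 keeps goals clustered, skill 1 spreads them.
clustered = [(0.0, 0.0), (0.1, 0.1), (0.2, 0.0)]   # one grid cell
spread = [(0.0, 0.0), (5.0, 5.0), (10.0, 0.0)]     # three grid cells
probs = adaptive_skill_distribution(
    [local_entropy(clustered), local_entropy(spread)]
)
# The goal-spreading skill is sampled more often than the clustered one.
```

A uniform skill distribution corresponds to the `temperature → ∞` limit of this sketch, which is the baseline the abstract compares against.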
URL
https://arxiv.org/abs/2404.12999