Abstract
Reinforcement learning (RL) often struggles on hard-exploration problems, where the desired outcomes or high rewards are rarely observed. Although curriculum RL, a framework that solves complex tasks by proposing a sequence of surrogate tasks, shows reasonable results, most previous works still have difficulty proposing a curriculum because they lack a mechanism for obtaining calibrated guidance toward the desired outcome states without any prior domain knowledge. To alleviate this, we propose an uncertainty- and temporal-distance-aware curriculum goal generation method for outcome-directed RL that operates by solving a bipartite matching problem. It not only provides precisely calibrated curriculum guidance toward the desired outcome states, but also achieves much better sample efficiency and geometry-agnostic curriculum goal proposal compared to previous curriculum RL methods. We demonstrate quantitatively and qualitatively that our algorithm significantly outperforms these prior methods on a variety of challenging navigation and robotic manipulation tasks.
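To make the core idea concrete, here is a minimal, hypothetical sketch of selecting curriculum goals by bipartite matching: candidate goals are assigned to desired outcome states by minimizing a cost that combines an uncertainty term and a temporal-distance term. The weights, the brute-force solver, and the `uncertainty`/`temporal_dist` proxies below are all illustrative assumptions, not the paper's actual estimators or solver.

```python
# Hypothetical sketch: curriculum goal selection as min-cost bipartite matching.
# cost[i][j] = w_u * uncertainty(goal_i) + w_d * temporal_distance(goal_i, outcome_j)
# All numbers and weights are illustrative assumptions.
from itertools import permutations

def match_goals(cost):
    """Exact min-cost bipartite matching by brute force (square cost matrix).
    Fine for a toy example; a real implementation would use e.g. the
    Hungarian algorithm for larger instances."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

# Toy cost matrix: rows = candidate curriculum goals, cols = desired outcomes.
w_u, w_d = 1.0, 0.5                                 # assumed trade-off weights
uncertainty = [0.2, 0.8, 0.5]                       # proxy per candidate goal
temporal_dist = [[4, 2, 6], [1, 5, 3], [7, 2, 1]]   # proxy goal-to-outcome distances
cost = [[w_u * uncertainty[i] + w_d * temporal_dist[i][j] for j in range(3)]
        for i in range(3)]

assignment, total = match_goals(cost)
# assignment[i] is the outcome state matched to candidate goal i
```

In a full training loop, the matched goals would then be handed to a goal-conditioned policy as the next curriculum stage; here the matching step alone is shown in isolation.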
URL
https://arxiv.org/abs/2301.11741