Abstract
Aiming to produce reinforcement learning (RL) policies that are human-interpretable and generalize better to novel scenarios, Trivedi et al. (2021) present a method (LEAPS) that first learns a program embedding space to continuously parameterize diverse programs from a pre-generated program dataset, and then, given a task, searches for a task-solving program in the learned program embedding space. Despite encouraging results, the program policies that LEAPS can produce are limited by the distribution of the program dataset. Furthermore, during the search, LEAPS evaluates each candidate program solely on its return, failing to precisely reward the correct parts of a program and penalize the incorrect parts. To address these issues, we propose to learn a meta-policy that composes a series of programs sampled from the learned program embedding space. By composing programs, our proposed method can produce program policies that describe out-of-distributionally complex behaviors and directly assign credit to the programs that induce desired behaviors. We design and conduct extensive experiments in the Karel domain. The experimental results show that our proposed framework outperforms baselines, and the ablation studies confirm the limitations of LEAPS and justify our design choices.
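To make the composition idea concrete, the following is a minimal sketch (in PyTorch) of the macro-step loop the abstract describes: a meta-policy repeatedly emits a latent vector, a pretrained decoder maps it to a program, and the program is executed to completion before the next decision, so each program receives its own return. All names here (MetaPolicy, rollout, decoder.decode, env.execute_program) are hypothetical stand-ins for illustration, not the authors' actual code or API.

# Hedged sketch of composing programs with a meta-policy, assuming a frozen
# decoder from a learned program embedding space and a Karel-style environment
# that can run a whole program per macro-step. All names are hypothetical.
import torch

class MetaPolicy(torch.nn.Module):
    """Maps the current observation to a latent program vector."""
    def __init__(self, obs_dim: int, latent_dim: int):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(obs_dim, 128),
            torch.nn.Tanh(),
            torch.nn.Linear(128, latent_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def rollout(meta_policy, decoder, env, num_programs: int):
    """Compose `num_programs` programs; the per-program returns provide
    program-level credit assignment for training the meta-policy."""
    obs = env.reset()
    program_returns = []
    for _ in range(num_programs):
        latent = meta_policy(torch.as_tensor(obs, dtype=torch.float32))
        program = decoder.decode(latent)          # latent -> program text (hypothetical)
        obs, ret = env.execute_program(program)   # run program to completion (hypothetical)
        program_returns.append(ret)               # credit for this specific program
    return program_returns

Because the return is recorded per composed program rather than per episode, a program that induces desired behavior can be rewarded directly, which is the credit-assignment advantage the abstract claims over LEAPS's whole-program evaluation.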
URL
https://arxiv.org/abs/2301.12950