Paper Reading AI Learner

Hierarchical Programmatic Reinforcement Learning via Learning to Compose Programs

2023-01-30 14:50:46
Guan-Ting Liu, En-Pei Hu, Pu-Jen Cheng, Hung-Yi Lee, Shao-Hua Sun

Abstract

Aiming to produce reinforcement learning (RL) policies that are human-interpretable and can generalize better to novel scenarios, Trivedi et al. (2021) present a method (LEAPS) that first learns a program embedding space to continuously parameterize diverse programs from a pre-generated program dataset, and then searches for a task-solving program in the learned program embedding space when given a task. Despite encouraging results, the program policies that LEAPS can produce are limited by the distribution of the program dataset. Furthermore, while searching, LEAPS evaluates each candidate program solely based on its return, failing to precisely reward correct parts of programs and penalize incorrect parts. To address these issues, we propose to learn a meta-policy that composes a series of programs sampled from the learned program embedding space. By composing programs, our proposed method can produce program policies that describe out-of-distributionally complex behaviors and directly assign credit to programs that induce desired behaviors. We design and conduct extensive experiments in the Karel domain. The experimental results show that our proposed framework outperforms the baselines. The ablation studies confirm the limitations of LEAPS and justify our design choices.

Abstract (translated)

Aiming to produce reinforcement learning (RL) policies that are human-interpretable and generalize better to novel scenarios, Trivedi et al. (2021) introduced a method (LEAPS) that first learns a program embedding space to continuously parameterize diverse programs from a pre-generated program dataset, and then, given a task, searches the learned program embedding space for a task-solving program. Despite encouraging results, the program policies that LEAPS can produce are limited by the distribution of the program dataset. Moreover, during the search, LEAPS evaluates each candidate program solely on its return, failing to precisely reward the correct parts of a program and penalize the incorrect parts. To address these issues, we propose learning a meta-policy that composes a series of programs sampled from the learned program embedding space. By composing programs, our method can produce program policies that describe out-of-distributionally complex behaviors and directly assign credit to the programs that induce the desired behaviors. We design and conduct extensive experiments in the Karel domain. The results show that our proposed framework outperforms the baselines, and the ablation studies confirm the limitations of LEAPS and justify our design choices.
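
As a rough illustration of the idea described in the abstract, the sketch below shows a meta-policy that repeatedly emits a latent vector in a learned program embedding space, decodes it into a program, and executes that program in the environment, so that reward (and hence credit) is associated with each emitted program rather than with a single monolithic program. This is a minimal sketch under assumed interfaces: MetaPolicy, dummy_decoder, DummyKarelEnv, LATENT_DIM, and MAX_PROGRAMS are illustrative placeholders, not the authors' implementation or API.

```python
# Minimal sketch (illustrative only, not the authors' code) of composing programs
# sampled from a learned program embedding space with a meta-policy.
# MetaPolicy, dummy_decoder, DummyKarelEnv, LATENT_DIM, and MAX_PROGRAMS are
# assumed placeholder names; the real system would use the frozen decoder of the
# pre-trained LEAPS embedding space and a Karel task environment.
import torch
import torch.nn as nn

LATENT_DIM = 64      # assumed size of the learned program embedding space
MAX_PROGRAMS = 5     # assumed budget of programs composed per episode


class MetaPolicy(nn.Module):
    """Maps the current observation to a distribution over program embeddings."""

    def __init__(self, obs_dim: int, latent_dim: int = LATENT_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, 2 * latent_dim),   # mean and log-std of a Gaussian
        )

    def forward(self, obs: torch.Tensor) -> torch.distributions.Normal:
        mean, log_std = self.net(obs).chunk(2, dim=-1)
        return torch.distributions.Normal(mean, log_std.exp())


def dummy_decoder(z: torch.Tensor) -> str:
    """Stand-in for a frozen decoder that maps a latent vector to program text."""
    return "DEF run m( move turnLeft m)"   # fixed placeholder Karel-style program


class DummyKarelEnv:
    """Stand-in environment that 'executes' a program and returns a reward."""

    def __init__(self, obs_dim: int = 16):
        self.obs_dim = obs_dim

    def reset(self) -> torch.Tensor:
        return torch.zeros(self.obs_dim)

    def execute(self, program: str):
        # Pretend to run the program; return next observation, reward, done flag.
        return torch.randn(self.obs_dim), float(torch.rand(())), False


def rollout(policy: MetaPolicy, decoder, env, max_programs: int = MAX_PROGRAMS):
    """Compose up to `max_programs` programs; credit each program with its reward."""
    obs = env.reset()
    transitions = []
    for _ in range(max_programs):
        dist = policy(obs)                     # action = one program embedding
        z = dist.sample()
        program = decoder(z)                   # decode the embedding into a program
        next_obs, reward, done = env.execute(program)
        transitions.append((obs, z, reward))   # per-program credit assignment
        obs = next_obs
        if done:
            break
    return transitions


if __name__ == "__main__":
    env = DummyKarelEnv()
    policy = MetaPolicy(obs_dim=env.obs_dim)
    trajectory = rollout(policy, dummy_decoder, env)
    print(f"composed {len(trajectory)} programs")
```

The point of the sketch is only the composition loop: each decoded program is executed and rewarded individually, which is what lets credit be assigned to the specific programs that induce desired behaviors instead of only to the overall return.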

URL

https://arxiv.org/abs/2301.12950

PDF

https://arxiv.org/pdf/2301.12950.pdf

