Abstract
Existing Deep Reinforcement Learning (DRL) algorithms suffer from sample inefficiency. Episodic control-based approaches address this by leveraging highly rewarded past experiences to improve the sample efficiency of DRL algorithms. However, previous episodic control-based approaches fail to exploit the latent information in historical behaviors (e.g., state transitions and topological similarities) and lack scalability during DRL training. This work introduces Neural Episodic Control with State Abstraction (NECSA), a simple yet effective state-abstraction-based episodic control framework that combines a more comprehensive episodic memory, a novel state evaluation, and multi-step state analysis. We evaluate our approach on MuJoCo and Atari tasks in the OpenAI Gym domain. The experimental results indicate that NECSA achieves higher sample efficiency than state-of-the-art episodic control-based approaches. Our data and code are available at the project website (this https URL).
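As a rough illustration of the idea the abstract describes, the Python sketch below keeps an episodic memory keyed on abstract states (here, grid-discretized continuous states) and records the returns observed from each abstract state, which could then be used to evaluate and re-weight experiences for the underlying DRL learner. All names (AbstractEpisodicMemory, abstract, score) and the grid discretization and averaging rule are illustrative assumptions, not NECSA's actual design.

import numpy as np

class AbstractEpisodicMemory:
    """Sketch: episodic memory over grid-abstracted states (hypothetical API)."""

    def __init__(self, low, high, bins=10):
        self.low = np.asarray(low, dtype=np.float64)
        self.high = np.asarray(high, dtype=np.float64)
        self.bins = bins
        self.table = {}  # abstract state -> (running mean return, visit count)

    def abstract(self, state):
        # Map a continuous state to a discrete grid cell (the "abstract state").
        ratio = (np.asarray(state) - self.low) / (self.high - self.low + 1e-8)
        cell = np.clip((ratio * self.bins).astype(int), 0, self.bins - 1)
        return tuple(cell)

    def update(self, trajectory, returns):
        # After an episode, record the return-to-go observed from each cell.
        for state, ret in zip(trajectory, returns):
            key = self.abstract(state)
            mean, count = self.table.get(key, (0.0, 0))
            self.table[key] = ((mean * count + ret) / (count + 1), count + 1)

    def score(self, state):
        # Evaluate a state against past returns from its cell; such a score
        # could serve as a signal for re-weighting rewards or experiences.
        return self.table.get(self.abstract(state), (0.0, 0))[0]

Coarser grids (smaller bins) generalize across nearby states at the cost of resolution; the paper's multi-step state analysis would extend this single-state scoring to short sequences of abstract states.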
URL
https://arxiv.org/abs/2301.11490