Abstract
Recent work has shown that deep neural networks are capable of approximating both value functions and policies in reinforcement learning domains featuring continuous state and action spaces. However, to the best of our knowledge, no previous work has succeeded in applying deep neural networks to structured (parameterized) continuous action spaces. To fill this gap, this paper focuses on learning within the domain of simulated RoboCup soccer, which features a small set of discrete action types, each of which is parameterized with continuous variables. The best learned agent can score goals more reliably than the 2012 RoboCup champion agent. As such, this paper represents a successful extension of deep reinforcement learning to the class of parameterized action space MDPs.
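To make the notion of a parameterized action space concrete, the following is a minimal sketch in Python. The action names and parameter bounds are illustrative assumptions in the spirit of simulated RoboCup soccer (not the paper's actual interface): a small set of discrete action types, each carrying its own continuous parameters.

```python
import random

# Hypothetical parameterized action space: each discrete action type maps
# to its continuous parameters, given as (low, high) bounds. Names and
# bounds are illustrative, not taken from the paper's environment.
ACTION_SPACE = {
    "dash": {"power": (0.0, 100.0), "direction": (-180.0, 180.0)},
    "turn": {"direction": (-180.0, 180.0)},
    "kick": {"power": (0.0, 100.0), "direction": (-180.0, 180.0)},
}


def sample_action(rng: random.Random):
    """Uniformly pick a discrete action type, then sample each of its
    continuous parameters uniformly within its bounds."""
    action_type = rng.choice(sorted(ACTION_SPACE))
    params = {
        name: rng.uniform(low, high)
        for name, (low, high) in ACTION_SPACE[action_type].items()
    }
    return action_type, params


if __name__ == "__main__":
    rng = random.Random(0)
    print(sample_action(rng))
```

The key structural point is that an action is a pair (discrete type, continuous parameter vector), so a learned policy must output both a choice over types and values for that type's parameters.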
URL
https://arxiv.org/abs/1511.04143