Abstract
Categorical Distributional Reinforcement Learning (CDRL) has demonstrated superior sample efficiency in learning complex tasks compared to conventional Reinforcement Learning (RL) approaches. However, the practical application of CDRL is hampered by a challenging projection step, extensive parameter tuning, and a reliance on domain knowledge. This paper addresses these challenges by introducing a pioneering Continuous Distributional Model-Free RL algorithm tailored for continuous action spaces. The proposed algorithm simplifies the implementation of distributional RL by adopting an actor-critic architecture in which the critic outputs a continuous probability distribution. Additionally, we propose an ensemble of multiple critics fused through a Kalman fusion mechanism to mitigate overestimation bias. Through a series of experiments, we validate that our proposed method is easy to train and serves as a sample-efficient solution for solving complex continuous-control tasks.
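The abstract does not spell out the fusion rule, but Kalman fusion of independent Gaussian estimates is conventionally done by inverse-variance weighting: precisions add, and the fused mean is the precision-weighted average of the individual means. A minimal sketch of this idea, assuming each critic in the ensemble reports a mean and a variance for the return (the function name `kalman_fuse` is hypothetical, not from the paper):

```python
import numpy as np

def kalman_fuse(means, variances):
    """Fuse independent Gaussian estimates (e.g. an ensemble of
    distributional critics) by Kalman-style inverse-variance weighting.

    Precisions (1/variance) add; the fused mean is the
    precision-weighted average of the individual means, so
    low-variance (more confident) critics contribute more.
    """
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    precisions = 1.0 / variances
    fused_var = 1.0 / precisions.sum()
    fused_mean = fused_var * (precisions * means).sum()
    return fused_mean, fused_var

# Two equally confident critics disagreeing on the value estimate:
# the fusion averages their means and halves the variance.
mean, var = kalman_fuse([1.0, 3.0], [1.0, 1.0])  # → (2.0, 0.5)
```

Because the fused estimate down-weights high-variance critics and its variance never exceeds that of the most confident member, this kind of fusion is a plausible mechanism for damping the overestimation bias the abstract mentions.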
URL
https://arxiv.org/abs/2405.02576