Abstract
In complex environments with large discrete action spaces, effective decision-making is critical in reinforcement learning (RL). Despite the widespread use of value-based RL approaches such as Q-learning, they carry a computational burden: they require maximizing a value function over all actions in every iteration. This burden becomes particularly challenging for large-scale problems and when deep neural networks serve as function approximators. In this paper, we present stochastic value-based RL approaches that, instead of optimizing over the entire set of $n$ actions in each iteration, consider only a variable stochastic subset of sublinear size, possibly as small as $\mathcal{O}(\log(n))$. The presented stochastic value-based RL methods include, among others, Stochastic Q-learning, StochDQN, and StochDDQN, all of which integrate this stochastic approach for both value-function updates and action selection. We establish the theoretical convergence of Stochastic Q-learning and provide an analysis of stochastic maximization. Moreover, through empirical validation, we show that the proposed approaches outperform the baseline methods across diverse environments, including different control problems, achieving near-optimal average returns in significantly reduced time.
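The core idea described above can be illustrated with a minimal tabular sketch. The snippet below is an assumption-laden illustration, not the paper's implementation: it replaces the full max over all $n$ actions in the Q-learning target with a max over a random subset of roughly $\log_2(n)$ actions. The function names (`stoch_max`, `stoch_q_update`) and the exact subset-sampling scheme are illustrative choices, not taken from the paper.

```python
import math
import random

def stoch_max(q_row, subset_size):
    """Approximate max of a Q-value row using a random sublinear subset of actions.

    This is a sketch of the stochastic-maximization idea: rather than scanning
    all n actions, sample `subset_size` of them uniformly and take their max.
    """
    n = len(q_row)
    subset = random.sample(range(n), min(subset_size, n))
    return max(q_row[a] for a in subset)

def stoch_q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning update with a stochastic max in the target.

    Q is a list of per-state rows of action values. The target uses
    stoch_max over O(log n) actions instead of a full max over all actions.
    """
    n = len(Q[s_next])
    k = max(1, math.ceil(math.log2(n)))  # subset of size O(log n)
    target = r + gamma * stoch_max(Q[s_next], k)
    Q[s][a] += alpha * (target - Q[s][a])
```

Because the subset is resampled every iteration, each action still gets compared over time, which is the intuition behind the convergence analysis mentioned in the abstract; the exact conditions are given in the paper.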
URL
https://arxiv.org/abs/2405.10310