Abstract
Robust Reinforcement Learning (RRL) is a promising Reinforcement Learning (RL) paradigm aimed at training models that are robust to uncertainty or disturbances, making them more effective in real-world applications. In this paradigm, uncertainty or disturbances are interpreted as the actions of a second, adversarial agent, so the problem reduces to finding agent policies that are robust to any actions of the opponent. This paper is the first to propose considering RRL problems within the framework of positional differential game theory, which provides a theoretically justified intuition for developing a centralized Q-learning approach. Namely, we prove that under Isaacs's condition (sufficiently general for real-world dynamical systems), the same Q-function can be used as an approximate solution of both the minimax and maximin Bellman equations. Based on these results, we present the Isaacs Deep Q-Network algorithms and demonstrate their superiority over baseline RRL and Multi-Agent RL algorithms in various environments.
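To make the central claim concrete, here is a minimal sketch in generic zero-sum notation; the symbols (dynamics f, costate direction p, successor state x', discount γ) are assumptions of this sketch, not necessarily the paper's own notation. Isaacs's condition asks that minimization and maximization over the two players' controls commute in the Hamiltonian of the dynamics:

\[
\min_{u \in U} \max_{v \in V} \langle f(t, x, u, v),\, p \rangle
\;=\;
\max_{v \in V} \min_{u \in U} \langle f(t, x, u, v),\, p \rangle
\quad \text{for all } t,\, x,\, p,
\]

while the maximin and minimax Bellman equations for a shared Q-function over joint actions read

\[
Q(x, u, v) = r(x, u, v) + \gamma \max_{u'} \min_{v'} Q(x', u', v'),
\qquad
Q(x, u, v) = r(x, u, v) + \gamma \min_{v'} \max_{u'} Q(x', u', v').
\]

Per the abstract, the paper's result is that under the first condition a single Q-function approximately solves both fixed-point equations, which is what allows the minimax and maximin backups to be unified in one centralized Q-learning update.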
URL
https://arxiv.org/abs/2405.02044