Abstract
The Adversarial Markov Decision Process (AMDP) is a learning framework that deals with unknown and varying tasks in decision-making applications such as robotics and recommendation systems. A major limitation of the AMDP formalism, however, is its pessimistic regret analysis: although the cost function can change from one episode to the next, in many settings its evolution is not truly adversarial. To address this, we introduce and study a new variant of AMDP that aims to minimize regret while utilizing a set of cost predictors. For this setting, we develop a new policy search method that achieves a sublinear optimistic regret with high probability, that is, a regret bound which degrades gracefully as the estimation power of the cost predictors decreases. Establishing such optimistic regret bounds is nontrivial because (i) as we demonstrate, the existing importance-weighted cost estimators cannot establish optimistic bounds, and (ii) the feedback model of AMDP is different from (and more realistic than) those in existing optimistic online learning works. Our result, in particular, hinges upon developing a novel optimistically biased cost estimator that leverages cost predictors and enables a high-probability regret analysis without imposing restrictive assumptions. We further discuss practical extensions of the proposed scheme and demonstrate its efficacy numerically.
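The abstract contrasts standard importance-weighted cost estimators with a predictor-assisted, optimistically biased one. As a rough, hypothetical sketch of why a cost predictor can help (this is a generic variance-reduction argument for importance weighting, not the paper's actual estimator; the occupancy `q`, true cost `c`, and prediction `m` below are made-up quantities):

```python
import numpy as np

# In adversarial MDPs with bandit feedback, costs are typically estimated by
# importance weighting against the occupancy measure. With a cost predictor,
# one can instead importance-weight only the residual (cost - prediction) and
# add the prediction back: the estimate stays unbiased, but its variance
# shrinks when the predictor is accurate.

def importance_weighted(cost, visited, occupancy):
    """Standard IW estimate; unbiased when P(visited) == occupancy."""
    return cost * visited / occupancy

def predictor_assisted(cost, visited, occupancy, prediction):
    """IW estimate of the residual, with the prediction added back."""
    return prediction + (cost - prediction) * visited / occupancy

rng = np.random.default_rng(0)
q, c, m, n = 0.2, 1.0, 0.95, 200_000   # occupancy, true cost, prediction, trials
visits = rng.random(n) < q             # Bernoulli(q) visitation indicators

iw = importance_weighted(c, visits, q)
pa = predictor_assisted(c, visits, q, m)

print(abs(iw.mean() - c) < 0.05)   # both estimators are (near) unbiased
print(abs(pa.mean() - c) < 0.05)
print(pa.var() < iw.var())         # accurate predictor -> much smaller variance
```

Here Var(IW) scales with the squared cost, while the predictor-assisted variant scales with the squared residual, which is the intuition behind regret bounds that improve with predictor accuracy.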
URL
https://arxiv.org/abs/2405.02188