Abstract
Given a dataset of expert demonstrations, inverse reinforcement learning (IRL) aims to recover a reward for which the expert is optimal. This work proposes a model-free algorithm for the entropy-regularized IRL problem. In particular, we employ a stochastic gradient descent update for the reward and a stochastic soft policy iteration update for the policy. Assuming access to a generative model, we prove that our algorithm recovers a reward for which the expert is $\varepsilon$-optimal using $\mathcal{O}(1/\varepsilon^{2})$ samples of the Markov decision process (MDP). Furthermore, with $\mathcal{O}(1/\varepsilon^{4})$ samples we prove that the optimal policy corresponding to the recovered reward is $\varepsilon$-close to the expert policy in total variation distance.
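As a rough illustration of the alternating scheme the abstract describes, here is a minimal tabular sketch in Python. It is not the paper's algorithm: the function names (`irl_alternating_updates`, `sample_next_state`), the occupancy-matching reward gradient, and the uniform state distribution used for the learner's occupancy are all illustrative assumptions. The generative-model assumption shows up as an oracle `sample_next_state(s, a)` that can be queried at any state-action pair.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def irl_alternating_updates(sample_next_state, expert_sa, n_states, n_actions,
                            gamma=0.99, tau=1.0, n_iters=1000, reward_lr=0.1):
    """Sketch of alternating reward / policy updates for entropy-regularized
    IRL with a generative model. All specifics below are assumptions for
    illustration, not the paper's exact updates.

    sample_next_state(s, a) -> s'  : one draw from the generative model
    expert_sa                      : nonempty iterable of (state, action) pairs
    """
    r = np.zeros((n_states, n_actions))   # current reward estimate
    q = np.zeros((n_states, n_actions))   # soft Q-value estimate

    # Empirical expert state-action occupancy from the demonstrations.
    expert_visits = np.zeros((n_states, n_actions))
    for s, a in expert_sa:
        expert_visits[s, a] += 1
    expert_visits /= expert_visits.sum()

    for _ in range(n_iters):
        # Soft policy induced by the current soft Q-values.
        pi = softmax(q / tau, axis=1)

        # --- stochastic soft policy iteration step ---
        # One-sample soft Bellman backup per (s, a) via the generative model.
        for s in range(n_states):
            for a in range(n_actions):
                s_next = sample_next_state(s, a)
                # Soft value V(s') = tau * log sum_a' exp(Q(s', a') / tau),
                # computed in a numerically stable way.
                m = q[s_next].max()
                v_next = m + tau * np.log(np.exp((q[s_next] - m) / tau).sum())
                q[s, a] = r[s, a] + gamma * v_next

        # --- stochastic gradient step on the reward ---
        # A standard occupancy-matching direction (expert minus learner), with
        # the learner's occupancy crudely approximated by its policy under a
        # uniform state distribution; the paper's actual gradient may differ.
        learner_visits = pi / n_states
        r += reward_lr * (expert_visits - learner_visits)

    return r, softmax(q / tau, axis=1)
```

The split mirrors the abstract's two updates: an inner stochastic soft policy iteration that tracks the soft-optimal policy for the current reward estimate, and an outer stochastic gradient step that adjusts the reward toward making the expert look optimal.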
URL
https://arxiv.org/abs/2403.16829