Abstract
Off-policy learning, which refers to optimizing a policy with access only to logged feedback data, has proven important in real-world applications such as search engines and recommender systems. Since the ground-truth logging policy that generated the logged data is usually unknown, previous work simply plugs its estimated value into off-policy learning, ignoring both the high bias and the high variance that such an estimator introduces, especially on samples with small and inaccurately estimated logging probabilities. In this work, we explicitly model the uncertainty in the estimated logging policy and propose an Uncertainty-aware Inverse Propensity Score estimator (UIPS) for improved off-policy learning. Experimental results on synthetic and three real-world recommendation datasets demonstrate the advantageous sample efficiency of the proposed UIPS estimator against an extensive list of state-of-the-art baselines.
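To make the contrast concrete, below is a minimal numerical sketch of the plug-in IPS estimator next to a hypothetical uncertainty-aware reweighting in the spirit of UIPS. The function names, the (mean, std) parameterization of the estimated logging probabilities, and the shrinkage rule are illustrative assumptions, not the paper's actual estimator.

import numpy as np

def ips_estimate(rewards, target_probs, logging_probs):
    """Standard IPS estimate of the target policy's value from logged data."""
    weights = target_probs / logging_probs
    return np.mean(weights * rewards)

def uncertainty_aware_ips(rewards, target_probs, logging_prob_mean,
                          logging_prob_std, gamma=1.0):
    """Hypothetical uncertainty-aware IPS: down-weight samples whose
    estimated logging probability is small and uncertain.

    gamma controls how strongly uncertain weights are shrunk toward zero.
    This shrinkage rule is an illustration, not the paper's UIPS derivation.
    """
    weights = target_probs / logging_prob_mean
    # Relative uncertainty of each logging-probability estimate;
    # small, noisily estimated propensities get large relative uncertainty.
    rel_uncertainty = logging_prob_std / logging_prob_mean
    shrinkage = 1.0 / (1.0 + gamma * rel_uncertainty ** 2)
    return np.mean(shrinkage * weights * rewards)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 10_000
    true_logging = rng.uniform(0.05, 0.9, size=n)            # unknown in practice
    est_logging = np.clip(true_logging
                          + rng.normal(0.0, 0.02, size=n), 1e-3, 1.0)  # plug-in estimate
    est_std = np.full(n, 0.02)                               # assumed estimator uncertainty
    target_probs = rng.uniform(0.05, 0.9, size=n)
    rewards = rng.binomial(1, 0.3, size=n).astype(float)

    print("plug-in IPS:       ", ips_estimate(rewards, target_probs, est_logging))
    print("uncertainty-aware: ", uncertainty_aware_ips(rewards, target_probs,
                                                       est_logging, est_std))

The key design point the sketch captures is that samples whose propensity estimates are both small and uncertain contribute the most variance to the plug-in IPS estimate, so their importance weights are shrunk rather than trusted at face value.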
URL
https://arxiv.org/abs/2303.06389