Abstract
In traditional statistical learning, data points are typically assumed to be independently and identically distributed (i.i.d.) according to an unknown probability distribution. This paper presents a contrasting viewpoint, perceiving data points as interconnected and employing a Markov reward process (MRP) to model the data. We reformulate typical supervised learning as an on-policy policy evaluation problem within reinforcement learning (RL) and introduce a generalized temporal difference (TD) learning algorithm as a solution. Theoretically, our analysis draws connections between the solutions of linear TD learning and ordinary least squares (OLS). We also show that under specific conditions, particularly when the noise terms are correlated, the TD solution is a more effective estimator than OLS. Furthermore, we establish the convergence of our generalized TD algorithms under linear function approximation. Empirical studies verify our theoretical results, examine key design choices of our TD algorithm, and demonstrate practical utility across various datasets, including regression and image-classification tasks with deep learning.
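The abstract only sketches the construction at a high level, so the following is a minimal illustrative sketch under our own assumptions: consecutive rows of a labeled dataset (X, y) are treated as successor states of a single MRP trajectory, and the reward is constructed as r_t = y_t - gamma * y_{t+1}, so that V(x_t) = y_t satisfies the Bellman equation V(x_t) = r_t + gamma * V(x_{t+1}). The reward construction, hyperparameters, and the `linear_td_regression` helper are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def linear_td_regression(X, y, gamma=0.9, lr=0.01, epochs=100):
    """TD(0)-style on-policy evaluation on a dataset viewed as one MRP trajectory.

    Assumed construction: consecutive rows of X are successor states, and the
    reward r_t = y_t - gamma * y_{t+1} makes the labels a valid value function.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for t in range(n - 1):
            r = y[t] - gamma * y[t + 1]                     # constructed reward
            td_error = r + gamma * X[t + 1] @ w - X[t] @ w  # TD(0) error
            w += lr * td_error * X[t]                       # semi-gradient update
    return w

def ols(X, y):
    """Ordinary least squares baseline for comparing against the TD solution."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    true_w = np.array([1.0, -2.0, 0.5])
    y = X @ true_w + 0.1 * rng.normal(size=500)
    print("TD  estimate:", linear_td_regression(X, y))
    print("OLS estimate:", ols(X, y))
```

With `gamma=0` the inner update reduces to the classic LMS rule, whose fixed point is the OLS solution; this gives one elementary view of the connection between linear TD learning and OLS that the abstract mentions.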
URL
https://arxiv.org/abs/2404.15518