Abstract
We present a Monte-Carlo simulation algorithm for real-time policy improvement of an adaptive controller. In the Monte-Carlo simulation, the long-term expected reward of each possible action is statistically measured, using the initial policy to make decisions in each step of the simulation. The action maximizing the measured expected reward is then taken, resulting in an improved policy. Our algorithm is easily parallelizable and has been implemented on the IBM SP1 and SP2 parallel-RISC supercomputers. We have obtained promising initial results in applying this algorithm to the domain of backgammon. Results are reported for a wide variety of initial policies, ranging from a random policy to TD-Gammon, an extremely strong multi-layer neural network. In each case, the Monte-Carlo algorithm gives a substantial reduction, by as much as a factor of 5 or more, in the error rate of the base players. The algorithm is also potentially useful in many other adaptive control applications in which it is possible to simulate the environment.
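As a concrete illustration of the rollout procedure described above, the sketch below scores each candidate action by simulating many trajectories under the base policy and then plays the action with the highest estimated expected reward. This is a minimal sketch, not the paper's implementation: the simulator interface (`step`, `legal_actions`, `is_terminal`), the trial count, and the depth cap are hypothetical placeholders. Because the trials for each action are independent, they parallelize trivially, which is what makes the algorithm well suited to machines such as the SP1 and SP2.

```python
from typing import Any, Callable


def rollout_value(state: Any,
                  first_action: Any,
                  base_policy: Callable[[Any], Any],
                  step: Callable[[Any, Any], tuple],
                  is_terminal: Callable[[Any], bool],
                  n_trials: int = 100,
                  max_depth: int = 1000) -> float:
    """Estimate the long-term expected reward of taking `first_action` in
    `state` and following `base_policy` thereafter, averaged over
    `n_trials` simulated trajectories (stochastic outcomes such as dice
    rolls are averaged out by the repeated trials)."""
    total = 0.0
    for _ in range(n_trials):
        s, ret, depth = state, 0.0, 0
        a = first_action
        while not is_terminal(s) and depth < max_depth:
            s, r = step(s, a)       # hypothetical simulator: (next_state, reward)
            ret += r
            depth += 1
            a = base_policy(s)      # base policy decides every subsequent step
        total += ret
    return total / n_trials


def rollout_decision(state: Any,
                     legal_actions: Callable[[Any], list],
                     base_policy: Callable[[Any], Any],
                     step: Callable[[Any, Any], tuple],
                     is_terminal: Callable[[Any], bool],
                     n_trials: int = 100) -> Any:
    """One on-line policy-improvement step: measure every legal action by
    Monte-Carlo rollouts under the base policy and play the best one."""
    return max(legal_actions(state),
               key=lambda a: rollout_value(state, a, base_policy, step,
                                           is_terminal, n_trials))
```

In a parallel setting, the `n_trials` rollouts per action (or the per-action evaluations themselves) can be farmed out to separate processors and the resulting averages combined, since each trajectory is simulated independently.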
Abstract (translated)
We present a Monte-Carlo simulation algorithm for real-time policy improvement of an adaptive controller. In this Monte-Carlo simulation, the long-term expected reward of each possible action is measured statistically, using the initial policy to make decisions at each step of the simulation. The action that maximizes the measured expected reward is then taken, yielding an improved policy. Our algorithm is easily parallelizable and has been implemented on the IBM SP1 and SP2 parallel-RISC supercomputers. We have obtained promising initial results from applying this algorithm to the domain of backgammon. Results are reported for a wide variety of initial policies, ranging from a random policy to TD-Gammon, an extremely strong multi-layer neural network. In each case, the Monte-Carlo algorithm substantially reduces the error rate of the base players, by as much as a factor of 5 or more. The algorithm is also potentially useful in many other adaptive control applications in which the environment can be simulated.
URL
https://arxiv.org/abs/2501.05407