Abstract
Q-learning methods are widely used in robot path planning but often face challenges of inefficient search and slow convergence. We propose an Improved Q-learning (IQL) framework that enhances standard Q-learning in two significant ways. First, we introduce the Path Adaptive Collaborative Optimization (PACO) algorithm to optimize Q-table initialization, providing better initial estimates and accelerating learning. Second, we incorporate a Utility-Controlled Heuristic (UCH) mechanism with dynamically tuned parameters to optimize the reward function, enhancing the algorithm's accuracy and effectiveness in path-planning tasks. Extensive experiments in three different raster grid environments validate the superior performance of our IQL framework. The results demonstrate that our IQL algorithm outperforms existing methods, including FIQL, PP-QL-based CPP, DFQL, and QMABC algorithms, in terms of path-planning capabilities.
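For context, the baseline the paper improves on is standard tabular Q-learning on a grid map. The sketch below is a minimal illustration of that baseline only, not the proposed IQL: it uses a zero-initialized Q-table (rather than PACO initialization) and a plain step/goal reward (rather than the UCH-shaped reward). The grid size, reward values, and hyperparameters are illustrative assumptions.

```python
# Minimal tabular Q-learning for grid path planning (baseline sketch).
# NOT the paper's IQL: no PACO Q-table initialization, no UCH reward
# shaping. Grid size, rewards, and hyperparameters are illustrative.
import random

random.seed(0)

N = 5                                   # N x N grid; start (0,0), goal (N-1,N-1)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2       # learning rate, discount, exploration

# Zero-initialized Q-table: Q[(state, action)] -> value.
Q = {((r, c), a): 0.0 for r in range(N) for c in range(N) for a in range(4)}

def step(s, a):
    """Apply action a in state s; moves off the grid are clamped."""
    dr, dc = ACTIONS[a]
    s2 = (max(0, min(N - 1, s[0] + dr)), max(0, min(N - 1, s[1] + dc)))
    done = s2 == (N - 1, N - 1)
    return s2, (10.0 if done else -1.0), done   # step cost -1, goal bonus +10

for _ in range(2000):                   # training episodes
    s, done = (0, 0), False
    while not done:
        # Epsilon-greedy action selection.
        if random.random() < EPS:
            a = random.randrange(4)
        else:
            a = max(range(4), key=lambda b: Q[(s, b)])
        s2, rwd, done = step(s, a)
        best_next = max(Q[(s2, b)] for b in range(4))
        # Standard Q-learning (Bellman) update.
        Q[(s, a)] += ALPHA * (rwd + GAMMA * best_next - Q[(s, a)])
        s = s2

# Greedy rollout with the learned Q-table.
s, path = (0, 0), [(0, 0)]
for _ in range(2 * N):
    if s == (N - 1, N - 1):
        break
    s, _, _ = step(s, max(range(4), key=lambda b: Q[(s, b)]))
    path.append(s)
print(path[-1], len(path) - 1)          # goal state and path length
```

The paper's two contributions slot into this skeleton at identifiable points: PACO replaces the zero initialization of `Q`, and UCH replaces the fixed step/goal reward with a dynamically tuned heuristic reward.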
URL
https://arxiv.org/abs/2501.05411