Abstract
This work considers multiple agents traversing a network from a source node to a goal node. The cost to an agent of traversing a link has a private component as well as a congestion component. Each agent's objective is to find a path to the goal node with minimum overall cost in a decentralized way. We model this as a fully decentralized multi-agent reinforcement learning problem and propose a novel multi-agent congestion cost minimization (MACCM) algorithm. Our MACCM algorithm uses linear function approximations of the transition probabilities and the global cost function. In the absence of a central controller, and to preserve privacy, agents communicate the cost function parameters to their neighbors via a time-varying communication network. Moreover, each agent maintains its own estimate of the global state-action value, which is updated via a multi-agent extended value iteration (MAEVI) sub-routine. We show that our MACCM algorithm achieves sub-linear regret. The proof requires the convergence of the cost function parameters, the convergence of the MAEVI algorithm, and an analysis of the regret bounds induced by the MAEVI triggering condition for each agent. We implement our algorithm on a two-node network with multiple links to validate it. We first identify the optimal policy, i.e., the optimal number of agents moving to the goal node in each period. We observe that the average regret is close to zero for 2 and 3 agents. The optimal policy captures the trade-off between the minimum cost of staying at a node and the congestion cost of moving to the goal node. Our work generalizes learning in the stochastic shortest path problem.
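The abstract describes agents sharing cost function parameters with neighbors over a time-varying communication network in place of a central controller. The sketch below illustrates one plausible form of such an exchange: a uniform neighbor-averaging consensus step over the current communication graph. The function name, uniform weighting, and graph representation are illustrative assumptions, not the paper's exact update rule.

```python
import numpy as np

def consensus_step(params, adjacency):
    """One round of neighbor averaging of per-agent parameter vectors.

    params: (n_agents, d) array; row i is agent i's current estimate
        of the cost function parameters.
    adjacency: (n_agents, n_agents) 0/1 matrix describing the current
        communication graph, which may change at every step.

    Illustrative assumption: each agent takes a uniform average over
    its closed neighborhood (itself plus current neighbors).
    """
    n = params.shape[0]
    new_params = np.empty_like(params)
    for i in range(n):
        # Indices of agent i's neighbors in the current graph, plus itself.
        nbrs = np.union1d(np.flatnonzero(adjacency[i]), [i])
        new_params[i] = params[nbrs].mean(axis=0)
    return new_params
```

On any fixed connected graph this update is a stochastic averaging matrix with positive diagonal, so repeated rounds drive the agents' parameter vectors toward a common consensus value, mimicking how decentralized agents can agree on shared quantities without a central controller.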
URL
https://arxiv.org/abs/2301.10993