Abstract
Effective governance and steering of behavior in complex multi-agent systems (MAS) are essential for managing system-wide outcomes, particularly in environments where interactions are structured by dynamic networks. In many applications, the goal is to promote pro-social behavior among agents, where network structure plays a pivotal role in shaping these interactions. This paper introduces a Hierarchical Graph Reinforcement Learning (HGRL) framework that governs such systems through targeted interventions in the network structure. Operating within the constraints of limited managerial authority, the HGRL framework demonstrates superior performance across a range of environmental conditions, outperforming established baseline methods. Our findings highlight the critical influence of agent-to-agent learning (social learning) on system behavior: under low social learning, the HGRL manager preserves cooperation, forming robust core-periphery networks dominated by cooperators. In contrast, high social learning accelerates defection, leading to sparser, chain-like networks. Additionally, the study underscores the importance of the system manager's authority level in preventing system-wide failures, such as agent rebellion or collapse, positioning HGRL as a powerful tool for dynamic network-based governance.
URL
https://arxiv.org/abs/2410.23396