Abstract
The cooperative driving technology of Connected and Autonomous Vehicles (CAVs) is crucial for improving the efficiency and safety of transportation systems. Learning-based methods such as Multi-Agent Reinforcement Learning (MARL) have demonstrated strong capabilities in cooperative decision-making tasks, but existing MARL approaches still face challenges in learning efficiency and performance. In recent years, Large Language Models (LLMs) have advanced rapidly and shown remarkable abilities across a variety of sequential decision-making tasks. To enhance the learning capabilities of cooperative agents while keeping decision-making efficient and cost-effective, we propose LDPD, a language-driven policy distillation method that guides MARL exploration. In this framework, an LLM-based teacher agent trains smaller student agents to make cooperative decisions through its own decision-making demonstrations. The teacher enriches the observations of the CAVs and uses the LLM to perform complex cooperative decision-making reasoning, leveraging carefully designed decision-making tools to reach expert-level decisions and thereby provide high-quality teaching experience. The student agents then distill the teacher's prior knowledge into their own models through gradient-based policy updates. Experiments show that the students rapidly improve with minimal guidance from the teacher and eventually surpass the teacher's performance. Extensive experiments further show that our approach achieves better performance and learning efficiency than baseline methods.
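The core mechanism the abstract describes is a student policy absorbing a teacher's demonstrations through gradient updates. As a minimal sketch (the paper's actual architecture, loss, and hyperparameters are not given here; the teacher distribution, action count, and learning rate below are illustrative assumptions), a generic policy-distillation step minimizes the cross-entropy between the teacher's action distribution and the student's policy:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over action logits."""
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(p_teacher, student_logits):
    """Distillation loss: CE between teacher distribution and student policy."""
    return -np.sum(p_teacher * np.log(softmax(student_logits) + 1e-12))

rng = np.random.default_rng(0)
n_actions = 5                                          # hypothetical action space
teacher_probs = softmax(rng.normal(size=n_actions))    # stand-in teacher demonstration
logits = np.zeros(n_actions)                           # student policy parameters

lr = 0.5
losses = []
for _ in range(100):
    losses.append(cross_entropy(teacher_probs, logits))
    # Gradient of the CE loss w.r.t. the logits of a softmax policy:
    grad = softmax(logits) - teacher_probs
    logits -= lr * grad                                # gradient-based policy update
```

After these updates the student's action distribution converges toward the teacher's; in the full method the student would continue training with RL and can then exceed the teacher, as the experiments report.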
URL
https://arxiv.org/abs/2410.24152