Abstract
Large Language Models (LLMs) have shown great success as high-level planners for zero-shot game-playing agents. However, these agents are primarily evaluated on Minecraft, where long-term planning is relatively straightforward. In contrast, agents tested in dynamic robot environments have so far been limited to simplistic settings with only a few objects and interactions. To fill this gap in the literature, we present NetPlay, the first LLM-powered zero-shot agent for the challenging roguelike NetHack. NetHack is particularly difficult due to its diverse set of items and monsters, complex interactions, and many ways to die. NetPlay uses an architecture designed for dynamic robot environments, modified for NetHack. Like previous approaches, it prompts the LLM to choose from predefined skills and tracks past interactions to enhance decision-making. Given NetHack's unpredictable nature, NetPlay detects important game events and interrupts running skills, enabling it to react to unforeseen circumstances. While NetPlay demonstrates considerable flexibility and proficiency in interacting with NetHack's mechanics, it struggles with ambiguous task descriptions and a lack of explicit feedback. Our findings show that NetPlay performs best with detailed context information, indicating the need for dynamic methods of supplying context information for complex games such as NetHack.
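The architecture described in the abstract boils down to a loop that asks an LLM to pick one of several predefined skills, runs that skill while recording past interactions, and interrupts it when an important game event is detected. The sketch below illustrates such a loop; all names (`Skill`, `AgentMemory`, `choose_skill`, `detect_important_event`, `run_agent`) are illustrative assumptions for this sketch, not NetPlay's actual API, and the LLM call and event detector are left as placeholders.

```python
# Minimal sketch of a skill-selection agent loop, assuming hypothetical
# interfaces; this is not NetPlay's implementation.
from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class Skill:
    """A predefined skill the LLM can pick, e.g. 'explore' or 'fight'."""
    name: str
    step: Callable[[], bool]  # returns True when the skill has finished


@dataclass
class AgentMemory:
    """Tracks past interactions so they can be fed back into the prompt."""
    history: List[str] = field(default_factory=list)

    def record(self, entry: str) -> None:
        self.history.append(entry)

    def as_context(self, last_n: int = 20) -> str:
        return "\n".join(self.history[-last_n:])


def choose_skill(prompt: str, skills: List[Skill]) -> Skill:
    """Placeholder for the LLM call that selects a predefined skill."""
    # A real agent would query an LLM with the prompt and the skill
    # descriptions; here we simply return the first skill.
    return skills[0]


def detect_important_event(observation: str) -> Optional[str]:
    """Placeholder event detector, e.g. 'a monster appeared' or 'HP is low'."""
    return None


def run_agent(skills: List[Skill],
              get_observation: Callable[[], str],
              max_steps: int = 100) -> None:
    memory = AgentMemory()
    current: Optional[Skill] = None
    for _ in range(max_steps):
        obs = get_observation()
        event = detect_important_event(obs)
        # Interrupt the running skill when an important game event occurs,
        # so the agent can react to unforeseen circumstances.
        if current is None or event is not None:
            if event is not None:
                memory.record(f"event: {event}")
            prompt = f"Context:\n{memory.as_context()}\nObservation:\n{obs}"
            current = choose_skill(prompt, skills)
            memory.record(f"chose skill: {current.name}")
        if current.step():  # skill reports it has finished
            memory.record(f"finished skill: {current.name}")
            current = None
```

As a trivial usage example under these assumptions, `run_agent([Skill("explore", step=lambda: True)], get_observation=lambda: "you see a staircase")` would repeatedly select and immediately complete the single available skill.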
URL
https://arxiv.org/abs/2403.00690