Abstract
Prominent large language models have exhibited human-level performance in many domains, even enabling their derived agents to simulate human and social interactions. While prior work has demonstrated the feasibility of grounding language agents in sandbox simulations or embodied simulators, current social intelligence benchmarks either remain at the language level or rely on subjective metrics. In pursuit of a more realistic and objective evaluation, we introduce the Social Tasks in Sandbox Simulation (STSS) benchmark, which assesses language agents \textbf{objectively} at the \textbf{action level} by scrutinizing goal achievement within a multi-agent simulation. We additionally sample conversation scenarios to build a language-level benchmark, which provides a cost-effective preliminary evaluation and aligns with prevailing benchmarks. To gauge the significance of agent architecture, we implement a target-driven planning (TDP) module as an adjunct to existing agents. Our evaluation shows that the STSS benchmark is challenging for state-of-the-art language agents. Furthermore, it effectively discriminates between distinct language agents, suggesting its usefulness as a benchmark for evaluating both language models and agent architectures.
URL
https://arxiv.org/abs/2404.05337