Abstract
Recent advancements in large language models (LLMs) have showcased their exceptional abilities across various tasks, such as code generation, problem-solving, and reasoning. Existing benchmarks evaluate tasks in isolation, yet the extent to which LLMs can understand prose-style tasks, identify the underlying problems, and then generate appropriate code solutions remains unexplored. Addressing this gap, we introduce PECC, a novel benchmark derived from Advent of Code (AoC) challenges and Project Euler, comprising 2396 problems. Unlike conventional benchmarks, PECC requires LLMs to interpret narrative-embedded problems, extract requirements, and generate executable code. A key feature of our dataset is the complexity added by natural language prompting in chat-based evaluations, mirroring real-world instruction ambiguities. Results show varying model performance between narrative and neutral problems, with particular difficulty on the math-based Euler subset: GPT-3.5-Turbo passes 50% of the AoC challenges but only 8% of the Euler problems. By probing the limits of LLMs' capabilities, our benchmark provides a framework to monitor and assess the subsequent progress of LLMs as universal problem solvers.
URL
https://arxiv.org/abs/2404.18766