Abstract
Large language model (LLM) performance on reasoning problems typically does not generalize out of distribution. Previous work has claimed that this can be mitigated by modifying prompts to include examples with chains of thought, i.e., demonstrations of solution procedures, with the intuition that an LLM can be taught in context an algorithm for solving the problem. This paper presents a case study of chain of thought on problems from Blocksworld, a classical planning domain, and examines the performance of two state-of-the-art LLMs across two axes: the generality of the examples given in the prompt, and the complexity of the problems queried with each prompt. Although our problems are very simple, we find meaningful performance improvements from chain-of-thought prompts only when those prompts are exceedingly specific to their problem class, and those improvements quickly deteriorate as the size n of the query-specified stack grows past the size of the stacks shown in the examples. Our results suggest that, contrary to previous claims in the literature, CoT's performance improvements do not stem from the model learning general algorithmic procedures via demonstrations, but instead depend on carefully engineered, highly problem-specific prompts. This spotlights a drawback of chain of thought: the sharp tradeoff between possible performance gains and the human labor required to generate examples with correct reasoning traces.
URL
https://arxiv.org/abs/2405.04776