Paper Reading AI Learner

Chain of Thoughtlessness: An Analysis of CoT in Planning

2024-05-08 02:48:28
Kaya Stechly, Karthik Valmeekam, Subbarao Kambhampati

Abstract

Large language model (LLM) performance on reasoning problems typically does not generalize out of distribution. Previous work has claimed that this can be mitigated by modifying prompts to include examples with chains of thought--demonstrations of solution procedures--with the intuition that it is possible to in-context teach an LLM an algorithm for solving the problem. This paper presents a case study of chain of thought on problems from Blocksworld, a classical planning domain, and examines the performance of two state-of-the-art LLMs across two axes: the generality of the examples given in the prompt, and the complexity of the problems queried with each prompt. Although our problems are very simple, we find meaningful performance improvements from chain of thought prompts only when those prompts are exceedingly specific to their problem class, and those improvements quickly deteriorate as the size n of the query-specified stack grows past the size of the stacks shown in the examples. Our results hint that, contrary to previous claims in the literature, CoT's performance improvements do not stem from the model learning general algorithmic procedures via demonstrations but instead depend on carefully engineered, highly problem-specific prompts. This spotlights a drawback of chain of thought, especially given the sharp tradeoff between possible performance gains and the amount of human labor needed to generate examples with correct reasoning traces.
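To make the two experimental axes concrete, here is a minimal, hypothetical sketch (not taken from the paper's actual prompts or code) of how one might generate a Blocksworld table-to-stack query of size n alongside a chain-of-thought exemplar written for a smaller stack size k. The function names `make_stacking_query` and `make_cot_exemplar` are assumptions for illustration only; the point is that the exemplar's reasoning trace is specific to stacks of size k, while the query can ask for any n.

```python
# Illustrative sketch (hypothetical, not the paper's prompt format):
# build a Blocksworld "stack n blocks into one tower" query and a
# chain-of-thought exemplar that is specific to a smaller stack size k.
from string import ascii_uppercase


def make_stacking_query(n: int) -> str:
    """Ask for a plan that stacks n blocks, initially all on the table, into one tower."""
    blocks = list(ascii_uppercase[:n])
    goal = " on ".join(blocks)  # e.g. "A on B on C" (A on top)
    return (
        f"Blocks {', '.join(blocks)} are all on the table and clear. "
        f"Give a plan that builds the tower {goal}."
    )


def make_cot_exemplar(k: int) -> str:
    """A worked chain-of-thought demonstration written for stacks of exactly size k."""
    blocks = list(ascii_uppercase[:k])
    # Build the tower bottom-up: stack the second-lowest block first, top block last.
    steps = [
        f"I pick up {blocks[i]} and stack it on {blocks[i + 1]}."
        for i in reversed(range(k - 1))
    ]
    return (
        make_stacking_query(k)
        + "\nLet's think step by step.\n"
        + "\n".join(steps)
        + "\nThe tower is complete."
    )


if __name__ == "__main__":
    # Exemplar demonstrates a 3-block stack; the query asks for a larger one (n > k),
    # the regime in which the paper reports CoT gains deteriorating.
    prompt = make_cot_exemplar(3) + "\n\n" + make_stacking_query(5)
    print(prompt)
```

Varying k (how specific or general the demonstration is) and n (how large the queried stack is) reproduces, in miniature, the two axes along which the paper evaluates the prompts.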

Abstract (translated)

Large language model (LLM) performance on reasoning problems typically does not generalize out of distribution. Previous work has claimed that this can be mitigated by modifying prompts to include chain-of-thought examples, i.e., demonstrations of solution procedures. This paper presents a case study on Blocksworld, a classical planning problem, and examines the performance of two state-of-the-art LLMs along two axes: the generality of the examples given in the prompt, and the complexity of the problems posed with each prompt. Although our problems are very simple, we find meaningful performance improvements only when the prompts are highly specific to the problem class, and those improvements quickly deteriorate as the size n of the query-specified stack grows past the size of the stacks shown in the examples. Our results suggest that, contrary to previous claims in the literature, CoT's performance improvements do not come from the model learning general algorithmic procedures via demonstrations, but instead depend on carefully engineering highly problem-specific prompts. This highlights a drawback of chain of thought, in particular the sharp tradeoff between possible performance gains and the human labor required to produce examples with correct reasoning traces.

URL

https://arxiv.org/abs/2405.04776

PDF

https://arxiv.org/pdf/2405.04776.pdf
