Hopping Too Late: Exploring the Limitations of Large Language Models on Multi-Hop Queries

2024-06-18 16:44:13
Eden Biran, Daniela Gottesman, Sohee Yang, Mor Geva, Amir Globerson

Abstract

Large language models (LLMs) can solve complex multi-step problems, but little is known about how these computations are implemented internally. Motivated by this, we study how LLMs answer multi-hop queries such as "The spouse of the performer of Imagine is". These queries require two information extraction steps: a latent one for resolving the first hop ("the performer of Imagine") into the bridge entity (John Lennon), and one for resolving the second hop ("the spouse of John Lennon") into the target entity (Yoko Ono). Understanding how the latent step is computed internally is key to understanding the overall computation. By carefully analyzing the internal computations of transformer-based LLMs, we discover that the bridge entity is resolved in the early layers of the model. Then, only after this resolution, the two-hop query is solved in the later layers. Because the second hop commences in later layers, there could be cases where these layers no longer encode the necessary knowledge for correctly predicting the answer. Motivated by this, we propose a novel "back-patching" analysis method whereby a hidden representation from a later layer is patched back to an earlier layer. We find that in up to 57% of previously incorrect cases there exists a back-patch that results in the correct generation of the answer, showing that the later layers indeed sometimes lack the needed functionality. Overall, our methods and findings open further opportunities for understanding and improving latent reasoning in transformer-based LLMs.
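To make the back-patching method concrete, here is a minimal sketch of the idea using PyTorch forward hooks on a HuggingFace GPT-2 model: record the residual-stream state of the last token at a later layer, then re-run the model with that state written back into an earlier layer. The model choice, the prompt, and the layer indices `src_layer`/`dst_layer` are illustrative assumptions, not the paper's exact experimental setup.

```python
# Minimal back-patching sketch with PyTorch forward hooks on a HuggingFace
# GPT-2 model. The model, prompt, and layer indices are illustrative
# assumptions; the paper studies larger LLMs and sweeps over layer pairs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model.eval()

prompt = "The spouse of the performer of Imagine is"
inputs = tokenizer(prompt, return_tensors="pt")

src_layer, dst_layer = 10, 4  # hypothetical later -> earlier layer pair

# Clean run: record the residual-stream state of the last token after
# block `src_layer` (hidden_states[i + 1] is the output of block i).
with torch.no_grad():
    clean = model(**inputs, output_hidden_states=True)
src_hidden = clean.hidden_states[src_layer + 1][0, -1].clone()

# Patched run: a forward hook overwrites the last token's hidden state at
# the output of the earlier block `dst_layer` with the later-layer state.
def back_patch(module, hook_inputs, output):
    # GPT2Block returns a tuple; output[0] is the hidden-states tensor.
    output[0][0, -1] = src_hidden
    return output

handle = model.transformer.h[dst_layer].register_forward_hook(back_patch)
with torch.no_grad():
    patched = model(**inputs)
handle.remove()

# Check whether the patched run now predicts the target entity.
next_id = patched.logits[0, -1].argmax().item()
print(tokenizer.decode(next_id))
```

In the paper's analysis, such patches are swept over (source, target) layer pairs rather than applied to one fixed pair as above; the reported 57% figure counts previously incorrect two-hop queries for which at least one back-patch yields the correct answer.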

URL

https://arxiv.org/abs/2406.12775

PDF

https://arxiv.org/pdf/2406.12775.pdf

