Abstract
Large Language Models (LLMs) demonstrate an impressive capacity to recall a vast range of factual knowledge. However, unravelling the underlying reasoning of LLMs and explaining how they exploit this factual knowledge internally remain active areas of investigation. Our work analyzes the factual knowledge encoded in the latent representations of LLMs when they are prompted to assess the truthfulness of factual claims. We propose an end-to-end framework that jointly decodes the factual knowledge embedded in the latent space of LLMs from a vector space into a set of ground predicates and represents its evolution across the layers as a temporal knowledge graph. Our framework relies on activation patching, a technique that intervenes in a model's inference computation by dynamically altering its latent representations. Consequently, we rely on neither external models nor additional training. We showcase our framework with local and global interpretability analyses on two claim verification datasets: FEVER and CLIMATE-FEVER. The local interpretability analysis exposes different classes of latent errors, from representation errors to multi-hop reasoning errors. The global analysis, in turn, uncovers patterns in the evolution of the model's factual knowledge across layers (e.g., a store-and-seek pattern for factual information). By enabling graph-based analyses of latent representations, this work takes a step towards the mechanistic interpretability of LLMs.
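The activation patching the abstract describes can be illustrated with a minimal sketch. The toy model and inputs below are hypothetical stand-ins (the paper's framework targets real LLM hidden states); the sketch only shows the core mechanism: caching an activation from one forward pass and overwriting the corresponding activation in another, assuming a PyTorch forward hook as the intervention point.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a transformer: a stack of layers whose intermediate
# activations we can intervene on. Layer indices and dimensions here
# are arbitrary choices for illustration only.
model = nn.Sequential(
    nn.Linear(4, 4),
    nn.ReLU(),
    nn.Linear(4, 4),
    nn.ReLU(),
    nn.Linear(4, 2),
)

x_clean = torch.randn(1, 4)    # "clean" input representation
x_corrupt = torch.randn(1, 4)  # "corrupted" input representation

# 1. Run the clean input and cache the activation after layer 1.
cached = {}
def cache_hook(module, inputs, output):
    cached["act"] = output.detach()

handle = model[1].register_forward_hook(cache_hook)
clean_out = model(x_clean)
handle.remove()

# 2. Run the corrupted input, but patch in the cached clean activation,
#    overriding the model's own computation at that layer.
def patch_hook(module, inputs, output):
    return cached["act"]

handle = model[1].register_forward_hook(patch_hook)
patched_out = model(x_corrupt)
handle.remove()

# 3. Downstream of the patched layer, the (deterministic) model behaves
#    as if it had seen the clean input, so the outputs match exactly.
print(torch.allclose(patched_out, clean_out))  # → True
```

In the paper's setting, comparing patched and unpatched runs at each layer is what allows the latent factual knowledge to be decoded and tracked layer by layer.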
URL
https://arxiv.org/abs/2404.03623