Abstract
Mechanistic Interpretability (MI) has emerged as a vital approach to demystifying the opaque decision-making of Large Language Models (LLMs). However, existing reviews primarily treat MI as an observational science, summarizing analytical insights while lacking a systematic framework for actionable intervention. To bridge this gap, we present a practical survey structured around a "Locate, Steer, and Improve" pipeline. We formally categorize Localizing (diagnosis) and Steering (intervention) methods according to the specific Interpretable Objects they act on, establishing a rigorous intervention protocol. Furthermore, we demonstrate how this framework enables tangible improvements in Alignment, Capability, and Efficiency, operationalizing MI as a practical methodology for model optimization. The curated paper list of this work is available at this https URL.
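To make the "Locate, Steer" half of the pipeline concrete, below is a minimal sketch of one canonical intervention from this literature: contrastive activation steering on a transformer's residual stream. The choice of model (`gpt2`), the layer index, the contrastive prompts, and the scaling coefficient `ALPHA` are illustrative assumptions for the sketch, not prescriptions from the survey.

```python
# Sketch of "Locate, Steer": locate an activation site, derive a steering
# vector from contrastive prompts, and add it at inference time.
# Assumptions (not from the survey): GPT-2, layer 6, ALPHA = 4.0.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

LAYER = 6    # assumed location of the behavior of interest
ALPHA = 4.0  # assumed steering strength

def residual_stream(prompt: str) -> torch.Tensor:
    """Locate: read the residual-stream activation at LAYER (last token)."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        hidden = model(ids, output_hidden_states=True).hidden_states
    return hidden[LAYER][0, -1]  # shape: (d_model,)

# Steering vector = difference between contrastive activations.
steer = residual_stream("I love this movie.") - residual_stream("I hate this movie.")

def add_steering(module, inputs, output):
    """Steer: add the vector to the block's residual-stream output."""
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + ALPHA * steer.to(hidden.dtype)
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.transformer.h[LAYER].register_forward_hook(add_steering)
try:
    ids = tokenizer("The film was", return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=20, do_sample=False,
                         pad_token_id=tokenizer.eos_token_id)
    print(tokenizer.decode(out[0]))
finally:
    handle.remove()  # restore the unsteered model
```

The hook-based design keeps the intervention reversible: removing the hook restores the original model, which matches the survey's framing of steering as a targeted, non-destructive alternative to fine-tuning.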
URL
https://arxiv.org/abs/2601.14004