Abstract
We investigate whether internal activations in language models can be used to detect arithmetic errors. Starting with a controlled setting of 3-digit addition, we show that simple probes can accurately decode both the model's predicted output and the correct answer from hidden states, regardless of whether the model's output is correct. Building on this, we train lightweight error detectors that predict model correctness with over 90% accuracy. We then extend our analysis to structured chain-of-thought traces on addition-only GSM8K problems and find that probes trained on simple arithmetic generalize well to this more complex setting, revealing consistent internal representations. Finally, we demonstrate that these probes can guide selective re-prompting of erroneous reasoning steps, improving task accuracy with minimal disruption to correct outputs. Our findings suggest that arithmetic errors can be anticipated from internal activations alone, and that simple probes offer a viable path toward lightweight model self-correction.
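The probing setup described above can be illustrated with a minimal sketch. This is not the paper's code: the activations below are synthetic stand-ins for hidden states, with correctness information planted along one direction, and the "error detector" is a plain logistic-regression probe trained by gradient descent. In practice the features would be residual-stream activations extracted from the language model on 3-digit addition prompts.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n = 64, 2000          # hidden size and number of problems (assumed)

# Synthetic "hidden states": runs where the model answered correctly (label 1)
# are shifted along a fixed direction, mimicking a linearly decodable signal.
labels = rng.integers(0, 2, size=n).astype(float)
direction = rng.normal(size=d_model)
acts = rng.normal(size=(n, d_model)) + np.outer(labels, direction)

X_tr, y_tr = acts[:1500], labels[:1500]    # train split
X_te, y_te = acts[1500:], labels[1500:]    # held-out split

# Lightweight probe: logistic regression fit by full-batch gradient descent.
w, b = np.zeros(d_model), 0.0
for _ in range(500):
    z = np.clip(X_tr @ w + b, -30, 30)     # clip logits for numerical safety
    p = 1.0 / (1.0 + np.exp(-z))           # predicted P(correct)
    w -= 0.5 * (X_tr.T @ (p - y_tr)) / len(y_tr)
    b -= 0.5 * (p - y_tr).mean()

preds = (X_te @ w + b) > 0
acc = (preds == y_te.astype(bool)).mean()
print(f"probe accuracy: {acc:.3f}")
```

Because the synthetic signal is strongly linearly separable, the probe reaches near-perfect held-out accuracy here; on real activations, the paper reports detectors that exceed 90% accuracy at predicting model correctness.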
URL
https://arxiv.org/abs/2507.12379