Abstract
With the widespread application of Large Language Models (LLMs) across diverse tasks, mainstream LLM platforms generate massive numbers of user-model interactions daily. To efficiently analyze model performance and diagnose failures in model answers, an automated framework that systematically categorizes and attributes errors is essential. However, existing evaluation models lack error-attribution capability. In this work, we establish a comprehensive Misattribution Framework with 6 primary and 15 secondary categories to enable in-depth analysis. Based on this framework, we present AttriData, a dataset designed specifically for error attribution, annotated with misattribution categories along with the corresponding scores and feedback. We also propose MisAttributionLLM, a model fine-tuned on AttriData, which is the first general-purpose judge model capable of simultaneously generating a score, misattribution, and feedback. Extensive experiments and analyses confirm the effectiveness and robustness of the proposed method.
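The paper does not include code on this page; as a purely illustrative sketch (all names and category labels below are assumptions, not taken from the paper), the three outputs the abstract says the judge model produces per interaction could be represented as a simple record:

```python
from dataclasses import dataclass

# Hypothetical structure (not from the paper): the three outputs the abstract
# attributes to MisAttributionLLM -- a score, a misattribution category, and feedback.
@dataclass
class JudgeResult:
    score: int                 # quality score for the model's answer
    primary_category: str      # one of the 6 primary misattribution categories
    secondary_category: str    # one of the 15 secondary categories
    feedback: str              # natural-language explanation of the error

def format_judgment(result: JudgeResult) -> str:
    """Render a judgment in a simple human-readable form."""
    return (f"Score: {result.score}\n"
            f"Misattribution: {result.primary_category} / {result.secondary_category}\n"
            f"Feedback: {result.feedback}")

# Illustrative example; the category names are invented for this sketch.
example = JudgeResult(
    score=2,
    primary_category="Reasoning Error",
    secondary_category="Logical Inconsistency",
    feedback="The answer contradicts an earlier step in its own derivation.",
)
print(format_judgment(example))
```

A structured record like this is one plausible way a platform could aggregate judgments at scale, e.g. grouping interactions by category to spot systematic failure modes.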
URL
https://arxiv.org/abs/2507.08459