Abstract
Human language production exhibits remarkable richness and variation, reflecting diverse communication styles and intents. However, this variation is often overlooked in summarization evaluation. While using multiple reference summaries is known to improve correlation with human judgments, the impact of the choice of reference set on reference-based metrics has not been systematically investigated. This work examines the sensitivity of widely used reference-based metrics to the choice of reference set, analyzing three diverse multi-reference summarization datasets: SummEval, GUMSum, and DUC2004. We demonstrate that many popular metrics exhibit significant instability. This instability is particularly concerning for n-gram-based metrics like ROUGE, where model rankings vary depending on the reference set used, undermining the reliability of model comparisons. To supplement existing findings beyond newswire summaries, we also collect human judgments of LLM outputs on genre-diverse data and examine their correlation with the metrics, finding weak to no correlation. Taken together, we recommend incorporating reference set variation into summarization evaluation to enhance consistency alongside correlation with human judgments, especially when evaluating LLMs.
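To make the kind of sensitivity analysis described above concrete, here is a minimal sketch (not the paper's exact protocol) of scoring the same system outputs against two alternative reference sets with ROUGE and comparing the resulting model rankings. It assumes the `rouge-score` package and SciPy's Kendall's tau; the systems, reference sets, and toy summaries are illustrative placeholders, and `avg_rouge1` is a hypothetical helper.

```python
# Sketch: probe reference-set sensitivity by ranking systems under two
# different reference sets and comparing the rankings (toy data only).
from rouge_score import rouge_scorer
from scipy.stats import kendalltau

scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)

def avg_rouge1(system_outputs, references):
    """Mean ROUGE-1 F1 of one system's outputs against one reference set."""
    scores = [
        scorer.score(ref, out)["rouge1"].fmeasure
        for out, ref in zip(system_outputs, references)
    ]
    return sum(scores) / len(scores)

# Toy data: two documents, two candidate systems, two alternative reference sets
# (e.g., summaries written by different annotators for the same documents).
systems = {
    "model_a": ["the cat sat on the mat", "rain is expected tomorrow"],
    "model_b": ["a cat was sitting on a mat", "tomorrow will bring heavy rain"],
}
reference_sets = {
    "refs_1": ["the cat sat on the mat", "heavy rain expected tomorrow"],
    "refs_2": ["a cat rested on the mat", "rain is likely tomorrow"],
}

# Rank the systems under each reference set, then compare the rankings.
rankings = {}
for set_name, refs in reference_sets.items():
    ordered = sorted(systems, key=lambda s: avg_rouge1(systems[s], refs), reverse=True)
    rankings[set_name] = [ordered.index(s) for s in systems]  # rank position per system
    print(set_name, ordered)

tau, _ = kendalltau(rankings["refs_1"], rankings["refs_2"])
print(f"Kendall's tau between rankings: {tau:.2f}")
```

If the tau between rankings drops well below 1 when only the reference set changes, the metric's model comparisons are unstable in the sense the abstract describes.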
URL
https://arxiv.org/abs/2506.14335