Abstract
We explore the need for more comprehensive and precise evaluation techniques for generative artificial intelligence (GenAI) in text summarization tasks, specifically in the area of opinion summarization. Traditional methods, which leverage automated metrics to evaluate machine-generated summaries of a collection of opinion pieces, e.g., product reviews, have shown limitations due to the paradigm shift introduced by large language models (LLMs). This paper addresses these shortcomings by proposing a novel, fully automated methodology for assessing the factual consistency of such summaries. The method is based on measuring the similarity between the claims in a given summary and those in the original reviews, capturing the coverage and consistency of the generated summary. To do so, we rely on a simple approach for extracting factual claims from texts, which we then compare and aggregate into a suitable score. We demonstrate that the proposed metric attributes higher scores to similar claims, regardless of whether a claim is negated, paraphrased, or expanded, and that the score correlates more strongly with human judgment than state-of-the-art metrics do.
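To make the claim-matching idea concrete, here is a minimal sketch of a scoring pipeline of this kind. It is not the paper's implementation: claim extraction is approximated by naive sentence splitting, claim similarity by cosine similarity of embeddings from the open-source sentence-transformers library (the model name all-MiniLM-L6-v2 is an arbitrary choice), and the coverage and consistency scores are simple max-similarity aggregations.

```python
# Minimal sketch of the claim-matching idea described in the abstract.
# Assumptions (not from the paper): claims are approximated by sentence
# splitting, and claim similarity by cosine similarity of sentence
# embeddings from the sentence-transformers library.
import re
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # arbitrary embedding model

def extract_claims(text: str) -> list[str]:
    """Naive stand-in for claim extraction: split text into sentences."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def claim_scores(summary: str, reviews: list[str]) -> dict[str, float]:
    """Score a summary against its source reviews via claim similarity."""
    summary_claims = extract_claims(summary)
    review_claims = [c for r in reviews for c in extract_claims(r)]
    # Embed and L2-normalize so dot products are cosine similarities.
    s_emb = model.encode(summary_claims, normalize_embeddings=True)
    r_emb = model.encode(review_claims, normalize_embeddings=True)
    sim = s_emb @ r_emb.T  # shape: (num_summary_claims, num_review_claims)
    # Consistency: each summary claim should be supported by some review claim.
    consistency = float(sim.max(axis=1).mean())
    # Coverage: each review claim should be reflected by some summary claim.
    coverage = float(sim.max(axis=0).mean())
    return {"consistency": consistency, "coverage": coverage}
```

A real system in the spirit of the paper would replace the sentence splitter with a proper claim-extraction step and handle negation explicitly, rather than relying on raw embedding similarity alone.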
URL
https://arxiv.org/abs/2602.08709