Abstract
This research examines the effectiveness of OpenAI's GPT models as independent evaluators of text summaries generated by six transformer-based models from Hugging Face: DistilBART, BERT, ProphetNet, T5, BART, and PEGASUS. We evaluated these summaries on essential properties of a high-quality summary - conciseness, relevance, coherence, and readability - using traditional metrics such as ROUGE and Latent Semantic Analysis (LSA). Uniquely, we also employed GPT not as a summarizer but as an evaluator, allowing it to independently assess summary quality without predefined metrics. Our analysis revealed significant correlations between GPT evaluations and traditional metrics, particularly in assessing relevance and coherence. The results demonstrate GPT's potential as a robust tool for evaluating text summaries, offering insights that complement established metrics and providing a basis for comparative analysis of transformer-based models in natural language processing tasks.
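The ROUGE scoring mentioned above can be illustrated with a minimal ROUGE-1 (unigram overlap) sketch in pure Python. This is an illustrative simplification, not the paper's implementation; published results typically use a full ROUGE package with stemming and multiple variants (ROUGE-2, ROUGE-L).

```python
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    """Simplified ROUGE-1 F1: unigram overlap between a reference
    summary and a candidate summary (whitespace tokenization,
    lowercased; no stemming, unlike standard ROUGE tooling)."""
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    overlap = sum((ref & cand).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: compare a model summary against a reference.
score = rouge1_f1("the cat sat on the mat", "the cat lay on the mat")
```

In the study's setup, scores like this for each model's summaries would then be compared against the GPT evaluator's judgments of relevance and coherence.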
URL
https://arxiv.org/abs/2405.04053