Abstract
Large language models (LLMs) have shown promise for automatic summarization, but the reasons behind their success are poorly understood. By conducting a human evaluation on ten LLMs across different pretraining methods, prompts, and model scales, we make two important observations. First, we find that instruction tuning, not model size, is the key to the LLM's zero-shot summarization capability. Second, existing studies have been limited by low-quality references, leading to underestimates of human performance and lower few-shot and finetuning performance. To better evaluate LLMs, we perform human evaluation over high-quality summaries we collect from freelance writers. Despite major stylistic differences, such as the amount of paraphrasing, we find that LLM summaries are judged to be on par with human-written summaries.
URL
https://arxiv.org/abs/2301.13848