Abstract
Despite great advances in document summarization techniques, factual inconsistencies between generated summaries and the original text still occur from time to time. This paper proposes a prefix-tuning-based approach that uses a set of trainable continuous prefix prompts together with discrete prompts to aid model generation, which significantly improves summaries generated with GPT-2 on both CNN/Daily Mail and XSum. The improvements in fact preservation in the generated summaries indicate the effectiveness of this prefix-tuning-based method for knowledge-enhanced document summarization, and also show great potential for other natural language processing tasks.
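The core mechanism mentioned above, prepending trainable continuous prefix vectors to the embedded discrete prompt and document tokens while the language model stays frozen, can be sketched as follows. The dimensions and random stubs are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical dimensions, chosen for illustration only.
d_model, prefix_len, seq_len = 8, 4, 10

rng = np.random.default_rng(0)

# Trainable continuous prefix: in prefix-tuning, these are the only
# parameters updated; the language model's weights remain frozen.
prefix = rng.normal(size=(prefix_len, d_model))

# Embeddings of the discrete prompt plus document tokens, normally
# produced by the frozen LM's embedding layer (stubbed here randomly).
token_embeddings = rng.normal(size=(seq_len, d_model))

# Prefix-tuning prepends the continuous prefix to the token embeddings;
# the frozen LM then attends over the combined sequence when generating.
model_input = np.concatenate([prefix, token_embeddings], axis=0)

print(model_input.shape)  # (prefix_len + seq_len, d_model) -> (14, 8)
```

In practice the prefix is usually injected as key/value activations at every transformer layer rather than only at the input embeddings, but the input-level view above conveys the idea in a few lines.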
URL
https://arxiv.org/abs/2301.11719