Abstract
The integration of multi-document pre-training objectives into language models has resulted in remarkable improvements in multi-document downstream tasks. In this work, we propose extending this idea by pre-training a generic multi-document model via a novel cross-document question answering pre-training objective. To that end, given a set (or cluster) of topically-related documents, we systematically generate semantically-oriented questions from a salient sentence in one document and challenge the model, during pre-training, to answer these questions while "peeking" into other topically-related documents. In a similar manner, the model is also challenged to recover the sentence from which the question was generated, again while leveraging cross-document information. This novel multi-document QA formulation directs the model to better recover cross-text informational relations, and introduces a natural augmentation that artificially increases the pre-training data. Further, unlike prior multi-document models that focus on either classification or summarization tasks, our pre-training objective formulation enables the model to perform tasks that involve both short text generation (e.g., QA) and long text generation (e.g., summarization). Following this scheme, we pre-train our model -- termed QAmden -- and evaluate its performance across several multi-document tasks, including multi-document QA, summarization, and query-focused summarization, yielding improvements of up to 7%, and significantly outperforming zero-shot GPT-3.5 and GPT-4.
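The abstract describes two paired pre-training objectives: answering a question generated from a salient sentence while "peeking" into the other documents in the cluster, and recovering the source sentence itself. A minimal sketch of how such training instances might be assembled is given below; the function names (`generate_question`, `build_instances`) and the input/target formatting are illustrative assumptions, not the authors' actual pipeline, and a real system would use a trained question-generation model rather than the placeholder shown here.

```python
# Illustrative sketch (not the paper's code) of assembling cross-document
# QA pre-training instances from a cluster of topically-related documents.

def generate_question(sentence: str, answer_span: str) -> str:
    """Placeholder for a trained question-generation model: mask the
    answer span in the salient sentence to form a cloze-style question."""
    return sentence.replace(answer_span, "[MASK]") + " What is [MASK]?"

def build_instances(cluster: list[str], doc_idx: int,
                    sentence: str, answer_span: str) -> list[dict]:
    """Build the two pre-training instances for one salient sentence:
    (1) answer the generated question, (2) recover the source sentence.
    The context excludes the source document, forcing cross-document
    'peeking' into the rest of the cluster."""
    question = generate_question(sentence, answer_span)
    context = " ".join(d for i, d in enumerate(cluster) if i != doc_idx)
    return [
        {"input": f"question: {question} context: {context}",
         "target": answer_span},                 # short-generation objective
        {"input": f"recover: {question} context: {context}",
         "target": sentence},                    # sentence-recovery objective
    ]
```

Because each salient sentence yields multiple instances against the same cluster, this construction also acts as the natural data augmentation the abstract mentions.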
URL
https://arxiv.org/abs/2305.15387