Paper Reading AI Learner

Peek Across: Improving Multi-Document Modeling via Cross-Document Question-Answering

2023-05-24 17:48:40
Avi Caciularu, Matthew E. Peters, Jacob Goldberger, Ido Dagan, Arman Cohan

Abstract

The integration of multi-document pre-training objectives into language models has resulted in remarkable improvements in multi-document downstream tasks. In this work, we propose extending this idea by pre-training a generic multi-document model with a novel cross-document question-answering pre-training objective. To that end, given a set (or cluster) of topically-related documents, we systematically generate semantically-oriented questions from a salient sentence in one document and challenge the model, during pre-training, to answer these questions while "peeking" into other topically-related documents. In a similar manner, the model is also challenged to recover the sentence from which the question was generated, again while leveraging cross-document information. This novel multi-document QA formulation directs the model to better recover cross-text informational relations, and introduces a natural augmentation that artificially increases the pre-training data. Further, unlike prior multi-document models that focus on either classification or summarization tasks, our pre-training objective formulation enables the model to perform tasks that involve both short text generation (e.g., QA) and long text generation (e.g., summarization). Following this scheme, we pre-train our model -- termed QAmden -- and evaluate its performance across several multi-document tasks, including multi-document QA, summarization, and query-focused summarization, yielding improvements of up to 7% and significantly outperforming zero-shot GPT-3.5 and GPT-4.
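To make the described pre-training objective concrete, the sketch below shows one way a cross-document QA instance might be assembled from a cluster of topically-related documents: a question generated from a salient sentence is paired with the cluster in which that sentence has been masked, and the target asks the model both to answer the question and to regenerate the sentence. This is only an illustrative reconstruction of the abstract's description, not the authors' implementation; the function names, separator tokens (`<doc-sep>`, `<mask>`, `<sent-sep>`), and data layout are assumptions.

```python
# Hypothetical sketch of building a cross-document QA pre-training example,
# following the description in the QAmden abstract (not the authors' code).

from dataclasses import dataclass
from typing import List


@dataclass
class PretrainingInstance:
    source: str  # question + cluster documents, with the salient sentence masked
    target: str  # answer followed by the recovered salient sentence


def build_instance(cluster: List[str], doc_idx: int, salient_sentence: str,
                   question: str, answer: str) -> PretrainingInstance:
    """Form one cross-document QA example from a cluster of related documents.

    The salient sentence is masked out of its source document, so the model
    must "peek" into the other documents in the cluster to answer the
    question and to recover the sentence itself.
    """
    masked_doc = cluster[doc_idx].replace(salient_sentence, "<mask>")
    other_docs = [d for i, d in enumerate(cluster) if i != doc_idx]

    # Concatenate the question with all documents; separator token is assumed.
    source = question + " <doc-sep> " + " <doc-sep> ".join([masked_doc] + other_docs)
    # The target asks for the answer and the reconstruction of the masked sentence.
    target = answer + " <sent-sep> " + salient_sentence
    return PretrainingInstance(source=source, target=target)
```

In this reading, the question/answer pair would come from an automatic question-generation step over the salient sentence, so the same document cluster yields many distinct training instances, which is the "natural augmentation" the abstract refers to.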

URL

https://arxiv.org/abs/2305.15387

PDF

https://arxiv.org/pdf/2305.15387.pdf

