Abstract
Recent studies show that sentence-level extractive QA, i.e., based on Answer Sentence Selection (AS2), is outperformed by Generation-based QA (GenQA) models, which generate answers using the top-k answer sentences ranked by AS2 models, in retrieval-augmented generation style. In this paper, we propose a novel training paradigm for GenQA using supervision from automatic QA evaluation models (GAVA). Specifically, we propose three strategies to transfer knowledge from these QA evaluation models to a GenQA model: (i) augmenting training data with answers generated by the GenQA model and labelled by GAVA statically, before training; (ii) doing so dynamically, at every training epoch; and (iii) using the GAVA score to weight the generator loss during GenQA training. We evaluate our proposed methods on two academic and one industrial dataset, obtaining a significant improvement in answering accuracy over the previous state of the art.
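Strategy (iii) can be illustrated with a minimal sketch. This is not the authors' code: the function name, the score range, and the simple multiplicative weighting are assumptions for illustration. The idea is that each generated answer's loss is scaled by its GAVA score, so answers the evaluation model judges correct contribute more to the gradient than answers it judges wrong.

```python
def gava_weighted_loss(per_example_losses, gava_scores):
    """Weight each generated answer's loss by its GAVA score, then average.

    per_example_losses: generator (e.g., cross-entropy) loss per answer.
    gava_scores: automatic QA evaluation scores, assumed in [0, 1],
    where higher means the answer is judged more likely correct.
    """
    assert len(per_example_losses) == len(gava_scores)
    weighted = [loss * score for loss, score in zip(per_example_losses, gava_scores)]
    return sum(weighted) / len(weighted)

# Illustrative batch of three generated answers: the third answer has a
# low GAVA score, so its large loss is down-weighted in the batch average.
losses = [2.0, 1.0, 4.0]
scores = [0.9, 0.5, 0.1]
print(gava_weighted_loss(losses, scores))  # (1.8 + 0.5 + 0.4) / 3 = 0.9
```

In a real setup the per-example losses would come from the GenQA model's decoder and the scores from a trained answer-verification model; only the weighting scheme is sketched here.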
URL
https://arxiv.org/abs/2305.15344