Abstract
This paper introduces Stochastic RAG, a novel approach for end-to-end optimization of retrieval-augmented generation (RAG) models that relaxes the simplifying assumptions of marginalization and document independence made in most prior work. Stochastic RAG casts retrieval in RAG as a process of stochastic sampling without replacement. Through this formulation, we employ straight-through Gumbel-top-k, which provides a differentiable approximation of sampling without replacement and enables effective end-to-end optimization of RAG. We conduct extensive experiments on seven diverse datasets spanning a wide range of tasks, from open-domain question answering and fact verification to slot-filling for relation extraction and dialogue systems. By applying this optimization method to a recent and effective RAG model, we advance state-of-the-art results on six out of seven datasets.
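The Gumbel-top-k trick mentioned above can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: it shows the forward (sampling) pass of Gumbel-top-k, whereas the straight-through estimator in the paper additionally substitutes a differentiable relaxation in the backward pass. The function name and signature are hypothetical.

```python
import math
import random

def gumbel_top_k(log_probs, k, rng=random):
    """Sample k distinct indices without replacement, in proportion
    to the (unnormalized) probabilities exp(log_probs).

    The Gumbel-top-k trick: perturb each log-probability with
    independent Gumbel(0, 1) noise, then take the k largest
    perturbed values. This is equivalent to sequential sampling
    without replacement from the softmax distribution.
    """
    # Gumbel(0, 1) noise via inverse transform: -log(-log(U)), U ~ Uniform(0, 1)
    perturbed = [lp - math.log(-math.log(rng.random())) for lp in log_probs]
    # Indices of the k largest perturbed log-probabilities
    return sorted(range(len(log_probs)), key=lambda i: perturbed[i], reverse=True)[:k]
```

In a RAG setting, `log_probs` would be the retrieval scores over candidate documents and the returned indices the retrieved set; making this step differentiable (e.g. with a straight-through relaxation) is what allows gradients from the generator to reach the retriever.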
URL
https://arxiv.org/abs/2405.02816