Abstract
We introduce REPLUG, a retrieval-augmented language modeling framework that treats the language model (LM) as a black box and augments it with a tunable retrieval model. Unlike prior retrieval-augmented LMs that train language models with special cross-attention mechanisms to encode the retrieved text, REPLUG simply prepends retrieved documents to the input for the frozen black-box LM. This simple design can be easily applied to any existing retrieval and language models. Furthermore, we show that the LM can be used to supervise the retrieval model, which can then find documents that help the LM make better predictions. Our experiments demonstrate that REPLUG with the tuned retriever significantly improves the performance of GPT-3 (175B) on language modeling by 6.3%, as well as the performance of Codex on five-shot MMLU by 5.1%.
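The core mechanism described above — prepend each retrieved document to the frozen LM's input, then combine the resulting output probabilities weighted by the retrieval scores — can be sketched as follows. This is a minimal illustration, not the paper's implementation: `lm_logprob` stands in for a black-box LM scoring call, and the softmax-weighted ensemble over documents is an assumption about how the per-document predictions are combined.

```python
import math

def replug_ensemble(lm_logprob, query, docs_with_scores, continuation):
    """Score `continuation` given `query`, ensembling over retrieved docs.

    lm_logprob(prompt, continuation) -> log P_LM(continuation | prompt)
    docs_with_scores: list of (document_text, retrieval_score) pairs.
    Returns the log probability of the weighted ensemble.
    """
    # Normalize retrieval scores into ensemble weights via softmax.
    scores = [s for _, s in docs_with_scores]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]

    # Prepend each retrieved document to the frozen LM's input,
    # then mix the per-document probabilities by retrieval weight.
    prob = 0.0
    for (doc, _), w in zip(docs_with_scores, weights):
        prompt = doc + "\n\n" + query
        prob += w * math.exp(lm_logprob(prompt, continuation))
    return math.log(prob)
```

Because the LM is never fine-tuned, only the retriever's scores (and hence the ensemble weights) change during training, which is what lets the LM's own likelihoods act as a supervision signal for the retrieval model.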
URL
https://arxiv.org/abs/2301.12652