Abstract
Augmenting language models with a retrieval mechanism has been shown to significantly improve their performance while keeping the number of parameters low. Retrieval-augmented models commonly rely on a semantic retrieval mechanism based on the similarity between dense representations of the query chunk and potential neighbors. In this paper, we study the state-of-the-art Retro model and observe that its performance gain is better explained by surface-level similarities, such as token overlap. Inspired by this, we replace the semantic retrieval in Retro with a surface-level method based on BM25, obtaining a significant reduction in perplexity. As full BM25 retrieval can be computationally costly for large datasets, we also apply it in a re-ranking scenario, gaining part of the perplexity reduction with minimal computational overhead.
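The surface-level retrieval the abstract refers to can be illustrated with a minimal, self-contained sketch of Okapi BM25 scoring. This is not the paper's implementation (the paper retrieves neighbor chunks over a large pretraining corpus); it is a toy example, with a made-up corpus and query, showing how BM25 ranks candidate chunks purely by token overlap weighted by term rarity and document length:

```python
import math
from collections import Counter

def bm25_scores(query_tokens, corpus_tokens, k1=1.5, b=0.75):
    """Score every document in `corpus_tokens` against `query_tokens`
    using the Okapi BM25 formula (surface-level token overlap)."""
    N = len(corpus_tokens)
    avgdl = sum(len(d) for d in corpus_tokens) / N
    # Document frequency: in how many documents each term appears.
    df = Counter()
    for doc in corpus_tokens:
        df.update(set(doc))
    scores = []
    for doc in corpus_tokens:
        tf = Counter(doc)
        s = 0.0
        for t in query_tokens:
            if t not in tf:
                continue
            # Smoothed inverse document frequency of the term.
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            # Saturating term-frequency component with length normalization.
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(doc) / avgdl)
            )
        scores.append(s)
    return scores

# Toy example: the candidate chunk sharing tokens with the query ranks first.
corpus = [
    "the cat sat on the mat".split(),
    "dogs chase cats in the park".split(),
    "retrieval augmented language models".split(),
]
query = "language models with retrieval".split()
scores = bm25_scores(query, corpus)
best = max(range(len(corpus)), key=scores.__getitem__)  # index 2
```

In the re-ranking scenario the abstract mentions, the same scoring function would be applied only to a small candidate set returned by the dense retriever, rather than to the full corpus, which is what keeps the computational overhead minimal.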
URL
https://arxiv.org/abs/2305.16243