Abstract
We present an empirical study of groundedness in long-form question answering (LFQA) by retrieval-augmented large language models (LLMs). In particular, we evaluate whether every generated sentence is grounded in the retrieved documents or the model's pre-training data. Across 3 datasets and 4 model families, our findings reveal that a significant fraction of generated sentences are consistently ungrounded, even when those sentences contain correct ground-truth answers. Additionally, we examine the impact of factors such as model size, decoding strategy, and instruction tuning on groundedness. Our results show that while larger models tend to ground their outputs more effectively, a substantial share of correct answers is still compromised by hallucinations. This study provides novel insights into the groundedness challenges of LFQA and underscores the need for more robust mechanisms in LLMs to mitigate the generation of ungrounded content.
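As a concrete illustration of the sentence-level evaluation described in the abstract, the sketch below checks whether each generated sentence is entailed by at least one retrieved document using an off-the-shelf NLI model. This is a minimal sketch under assumed choices: the model name (microsoft/deberta-large-mnli), the entailment threshold, and the helper functions are illustrative assumptions, not the paper's actual evaluation pipeline.

```python
# Hypothetical sketch: sentence-level groundedness via NLI entailment.
# Each generated sentence is treated as a hypothesis; each retrieved
# document is a premise. A sentence counts as grounded if at least one
# retrieved document entails it above a chosen threshold.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "microsoft/deberta-large-mnli"  # any NLI model could be used
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def entailment_prob(premise: str, hypothesis: str) -> float:
    """Probability that the premise entails the hypothesis."""
    inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1)[0]
    # Locate the entailment class from the model's label map.
    entail_idx = [i for i, label in model.config.id2label.items()
                  if label.lower() == "entailment"][0]
    return probs[entail_idx].item()

def grounded_sentences(sentences, retrieved_docs, threshold=0.5):
    """Flag each generated sentence as grounded or ungrounded."""
    report = []
    for sent in sentences:
        best = max(entailment_prob(doc, sent) for doc in retrieved_docs)
        report.append((sent, best >= threshold))
    return report
```

A usage example would pass the sentence-split model output and the retrieved passages to grounded_sentences and inspect which sentences fall below the threshold; the threshold value of 0.5 is an arbitrary placeholder.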
URL
https://arxiv.org/abs/2404.07060