Abstract
We present a lightweight approach for detecting nonfactual outputs from retrieval-augmented generation (RAG). Given a context and a putative output, we compute a factuality score that can be thresholded to yield a binary decision for checking the results of LLM-based question answering, summarization, or other systems. Unlike factuality checkers that themselves rely on LLMs, we use compact, open-source natural language inference (NLI) models, yielding a freely accessible solution with low latency and low cost at run-time and no need for LLM fine-tuning. The approach also enables downstream mitigation and correction of hallucinations by tracing them back to specific context chunks. Our experiments show high area under the ROC curve (AUC) across a wide range of relevant open-source datasets, indicating the effectiveness of our method for fact-checking RAG output.
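To make the setup concrete, below is a minimal Python sketch of the kind of NLI-based factuality scoring the abstract describes. The model choice (microsoft/deberta-large-mnli), the per-chunk scoring, the max-over-chunks aggregation, and the 0.5 threshold are illustrative assumptions, not the paper's exact recipe; the chunk index returned alongside the score illustrates how a low-scoring claim can be traced back to the context.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed model: any compact open-source NLI cross-encoder could stand in here.
MODEL = "microsoft/deberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.eval()

# Look up the entailment label index from the model config rather than hard-coding it.
ENTAIL = next(i for i, lbl in model.config.id2label.items()
              if lbl.lower() == "entailment")

def factuality_score(context_chunks: list[str], output: str) -> tuple[float, int]:
    """Score how well the retrieved context supports `output`.

    Each chunk serves as an NLI premise and the output as the hypothesis;
    the score is the max entailment probability over chunks, which also
    identifies which chunk best supports (or fails to support) the claim.
    """
    best_score, best_chunk = 0.0, -1
    with torch.no_grad():
        for i, chunk in enumerate(context_chunks):
            inputs = tokenizer(chunk, output, return_tensors="pt",
                               truncation=True, max_length=512)
            probs = model(**inputs).logits.softmax(dim=-1)[0]
            if probs[ENTAIL].item() > best_score:
                best_score, best_chunk = probs[ENTAIL].item(), i
    return best_score, best_chunk

# Thresholding the score yields the binary factual / hallucinated decision.
score, chunk_id = factuality_score(
    ["The Eiffel Tower is 330 m tall.", "It was completed in 1889."],
    "The Eiffel Tower was finished in 1889.",
)
print(f"score={score:.3f}, supporting chunk={chunk_id}, factual={score >= 0.5}")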
URL
https://arxiv.org/abs/2411.01022