Abstract
While hallucinations in large language models can be alleviated through retrieval-augmented generation and citation generation, how a model uses its internal knowledge remains opaque, and the trustworthiness of its generated answers remains questionable. In this work, we introduce the Context-Prior Augmented Citation Generation task, which requires models to generate citations that account for both external and internal knowledge while providing trustworthy references, evaluated with 5 metrics covering 3 aspects: answer helpfulness, citation faithfulness, and trustworthiness. We introduce RAEL, a paradigm for this task, and design INTRALIGN, an integrated method comprising customary data generation and an alignment algorithm. Our experimental results show that our method achieves better cross-scenario performance than other baselines. Extended experiments further reveal that retrieval quality, question types, and model knowledge considerably influence trustworthiness in citation generation.
URL
https://arxiv.org/abs/2504.14856