Abstract
Linking a claim to grounded references is a critical ability for meeting human demands for authentic and reliable information. Current studies are limited to specific tasks such as information retrieval or semantic matching, where claim-reference relationships are unique and fixed, whereas referential knowledge linking (RKL) in the real world can be far more diverse and complex. In this paper, we propose universal referential knowledge linking (URL), which aims to resolve diverse referential knowledge linking tasks with a single unified model. To this end, we propose an LLM-driven task-instructed representation compression, together with a multi-view learning approach, to effectively adapt the instruction-following and semantic-understanding abilities of LLMs to referential knowledge linking. Furthermore, we construct a new benchmark to evaluate the ability of models on referential knowledge linking tasks across different scenarios. Experiments demonstrate that universal RKL is challenging for existing approaches, while the proposed framework effectively resolves the task across various scenarios and therefore outperforms previous approaches by a large margin.
URL
https://arxiv.org/abs/2404.16248