Abstract
This study explores the use of Large Language Models (LLMs) for automatic evaluation of knowledge graph (KG) completion models. Historically, validating information in KGs has been a challenging task, requiring large-scale human annotation at prohibitive cost. With the emergence of general-purpose generative AI and LLMs, it is now plausible that human-in-the-loop validation could be replaced by a generative agent. We introduce a framework for the consistent, validated use of generative models in knowledge graph verification. Our framework is based upon recent open-source developments for structural and semantic validation of LLM outputs, and upon flexible approaches to fact checking and verification, supported by the capacity to reference external knowledge sources of any kind. The design is easy to adapt and extend, and can be used to verify any kind of graph-structured data through a combination of model-intrinsic knowledge, user-supplied context, and agents capable of external knowledge retrieval.
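The validation loop described above — prompt a generative model to judge a KG fact, then check its reply structurally and semantically before accepting it — can be sketched as follows. This is an illustrative sketch, not the paper's actual interface: `call_llm`, the JSON reply schema, and the closed verdict vocabulary are all assumptions; the stubbed model response stands in for any real chat-completion client.

```python
import json

# Closed vocabulary for the semantic check (assumed, not from the paper).
ALLOWED_VERDICTS = {"supported", "refuted", "insufficient"}

def call_llm(prompt: str) -> str:
    # Stub standing in for a real generative-model call; a real
    # implementation would query an LLM API with this prompt.
    return json.dumps({"verdict": "supported",
                       "evidence": "Model-intrinsic knowledge of French geography."})

def validate_triple(subject: str, predicate: str, obj: str) -> dict:
    """Ask the model to judge one (subject, predicate, object) triple,
    then validate the reply structurally (parseable JSON with the expected
    keys) and semantically (verdict drawn from a closed vocabulary)."""
    prompt = (f"Is the statement ({subject}, {predicate}, {obj}) true? "
              "Reply as JSON with keys 'verdict' and 'evidence'.")
    raw = call_llm(prompt)
    try:
        reply = json.loads(raw)  # structural check: output must parse
    except json.JSONDecodeError:
        return {"verdict": "insufficient", "evidence": "unparseable model output"}
    if reply.get("verdict") not in ALLOWED_VERDICTS:
        # semantic check failed: fall back to a safe default
        return {"verdict": "insufficient", "evidence": "invalid verdict value"}
    return reply

result = validate_triple("Paris", "capital_of", "France")
print(result["verdict"])  # → supported
```

In a full system the stubbed call would be replaced by a model client, and the evidence field could be grounded against user-supplied context or an external retrieval agent rather than model-intrinsic knowledge alone.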
URL
https://arxiv.org/abs/2404.15923