Abstract
Entity Alignment (EA) is vital for integrating diverse knowledge graph (KG) data, playing a crucial role in data-driven AI applications. Traditional EA methods primarily rely on comparing entity embeddings, but their effectiveness is constrained by the limited input KG data and the capabilities of the representation learning techniques. Against this backdrop, we introduce ChatEA, an innovative framework that incorporates large language models (LLMs) to improve EA. To address the constraints of limited input KG data, ChatEA introduces a KG-code translation module that translates KG structures into a format understandable by LLMs, thereby allowing LLMs to utilize their extensive background knowledge to improve EA accuracy. To overcome the over-reliance on entity embedding comparisons, ChatEA implements a two-stage EA strategy that capitalizes on LLMs' capability for multi-step reasoning in a dialogue format, thereby enhancing accuracy while preserving efficiency. Our experimental results affirm ChatEA's superior performance, highlighting LLMs' potential in facilitating EA tasks.
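The abstract's KG-code translation idea can be illustrated with a minimal sketch: rendering a KG entity's attributes and relations as a code-like class definition that an LLM can read in a prompt. The function name, signature, and output format below are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch (NOT ChatEA's actual module): translate one KG
# entity's local neighborhood into a Python-like class snippet so an
# LLM can consume graph structure as text.
def entity_to_code(name, attributes, relations):
    """Render a KG entity as a code-style snippet.

    attributes: dict of attribute name -> value
    relations:  list of (relation, neighbor_entity) tuples
    """
    lines = [f"class {name.replace(' ', '_')}:"]
    for key, value in attributes.items():
        lines.append(f"    {key} = {value!r}")
    for rel, neighbor in relations:
        # Relations to neighbors are kept as comments for the LLM to read.
        lines.append(f"    # {rel} -> {neighbor}")
    return "\n".join(lines)

snippet = entity_to_code(
    "Alan Turing",
    {"birth_year": 1912, "field": "computer science"},
    [("educated_at", "Princeton University")],
)
print(snippet)
```

A snippet like this could then be placed in a dialogue-style prompt, letting the LLM combine the encoded structure with its background knowledge when judging whether two entities align.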
URL
https://arxiv.org/abs/2402.15048