Abstract
Recent years have witnessed the empirical success of framing Knowledge Graph (KG) embeddings via language models. However, language model-based KG embeddings are usually deployed as static artifacts, which are difficult to modify after deployment without re-training. To address this issue, we propose a new task: editing language model-based KG embeddings. The task aims to enable data-efficient and fast updates to KG embeddings without degrading performance on the remaining facts. We build four new datasets: E-FB15k237, A-FB15k237, E-WN18RR, and A-WN18RR, and evaluate several knowledge-editing baselines, demonstrating the limited ability of previous models to handle this challenging task. We further propose a simple yet strong baseline, dubbed KGEditor, which utilizes additional parametric layers of a hypernetwork to edit/add facts. Comprehensive experimental results demonstrate that KGEditor updates specific facts more effectively, with low training cost, while leaving the rest unaffected. Code and datasets will be available in this https URL.
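As a rough illustration of the hypernetwork-based editing idea, the sketch below (our own simplification, not the paper's exact KGEditor architecture; all names and dimensions are hypothetical) shows a tiny hypernetwork that maps an encoding of the fact to be edited into a weight/bias delta for one extra adapter layer, while the base parameters stay frozen:

```python
import numpy as np

def hypernetwork_edit(h, fact, W_base, b_base, H1, H2):
    """Illustrative sketch: a hypernetwork (H1, H2) predicts a delta for
    one frozen adapter layer (W_base, b_base) from a fact encoding."""
    d = W_base.shape[0]
    # Hypernetwork forward pass: fact encoding -> hidden -> flat delta.
    hidden = np.maximum(0.0, H1 @ fact)      # ReLU hidden layer
    delta = H2 @ hidden                      # d*d weight + d bias entries
    dW = delta[: d * d].reshape(d, d)
    db = delta[d * d :]
    # Edited adapter: frozen base parameters plus the predicted delta.
    return h @ (W_base + dW).T + (b_base + db)

rng = np.random.default_rng(0)
d, f = 8, 4
h = rng.standard_normal((2, d))       # hidden states from the frozen LM
fact = rng.standard_normal(f)         # encoding of the fact to edit/add
W_base = rng.standard_normal((d, d))  # frozen adapter weight
b_base = np.zeros(d)                  # frozen adapter bias
H1 = rng.standard_normal((16, f))     # hypernetwork layer 1 (hypothetical)
H2 = rng.standard_normal((d * d + d, 16))  # hypernetwork layer 2
out = hypernetwork_edit(h, fact, W_base, b_base, H1, H2)
print(out.shape)  # (2, 8)
```

Because only the hypernetwork's parameters would be trained, an edit changes the model's behavior on the targeted fact without re-training or overwriting the base embedding model, which is the data-efficiency property the task evaluates.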
URL
https://arxiv.org/abs/2301.10405