Abstract
The mission of open knowledge graph (KG) completion is to draw new findings from known facts. Existing works that augment KG completion require either (1) factual triples to enlarge the graph reasoning space or (2) manually designed prompts to extract knowledge from a pre-trained language model (PLM), exhibiting limited performance and requiring expensive effort from experts. To this end, we propose TAGREAL, which automatically generates quality query prompts and retrieves support information from large text corpora to probe knowledge from the PLM for KG completion. The results show that TAGREAL achieves state-of-the-art performance on two benchmark datasets. We find that TAGREAL performs strongly even with limited training data, outperforming existing embedding-based, graph-based, and PLM-based methods.
URL
https://arxiv.org/abs/2305.15597