Abstract
We propose KGT5-context, a simple sequence-to-sequence model for link prediction (LP) in knowledge graphs (KGs). Our work expands on KGT5, a recent LP model that exploits the textual features of the KG, has a small model size, and is scalable. To reach good predictive performance, however, KGT5 relies on an ensemble with a knowledge graph embedding (KGE) model, which is itself excessively large and costly to use. In this short paper, we show empirically that adding contextual information (i.e., information about the direct neighborhood of a query vertex) alleviates the need for a separate KGE model to obtain good performance. The resulting KGT5-context model achieves state-of-the-art performance in our experimental study, while at the same time reducing model size significantly.
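The key idea can be illustrated with a minimal sketch: a link-prediction query is verbalized as text, and the query vertex's direct neighborhood is appended as additional context before being fed to a sequence-to-sequence model. The function name and the exact input format below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of verbalizing a link-prediction query together with
# the query vertex's direct neighborhood, in the spirit of KGT5-context.
# The format strings and function name are assumptions for illustration.

def verbalize_query(subject, relation, neighbors):
    """Build a textual seq2seq input: the LP query ("predict the object of
    (subject, relation)") followed by one 'relation | object' context line
    per neighboring edge of the subject."""
    lines = [f"predict tail: {subject} | {relation}"]
    for rel, obj in neighbors:
        lines.append(f"context: {rel} | {obj}")
    return "\n".join(lines)

# Example: query (Albert Einstein, field of work, ?) with two context edges.
query = verbalize_query(
    "Albert Einstein",
    "field of work",
    [("born in", "Ulm"), ("occupation", "physicist")],
)
print(query)
```

A seq2seq model trained on such inputs can then decode the answer entity directly as text, removing the need for a separate embedding model to supply neighborhood information.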
URL
https://arxiv.org/abs/2305.13059