Abstract
A Knowledge Graph (KG) is a directed graph representation of real-world entities and the relations between them. KGs can be applied in diverse Natural Language Processing (NLP) tasks that require external knowledge. The need to scale up and complete KGs automatically motivates Knowledge Graph Embedding (KGE), a class of shallow machine learning models that suffers from high memory and training-time costs. To mitigate the computational load, we propose a parameter-sharing method: using conjugate parameters for the complex numbers employed in KGE models. Our method improves memory efficiency by 2x for relation embeddings while achieving performance comparable to state-of-the-art non-conjugate models, with faster, or at least comparable, training time. We demonstrate the generalizability of our method with two best-performing KGE models, $5^{\bigstar}\mathrm{E}$ and $\mathrm{ComplEx}$, on five benchmark datasets.
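The conjugate parameter-sharing idea can be illustrated with a minimal NumPy sketch. This is an illustrative reconstruction under stated assumptions, not the paper's implementation: a ComplEx-style relation embedding of dimension `d` is built by storing only `d/2` complex parameters and tying the second half to the complex conjugate of the first, halving the relation-embedding memory; the function names are hypothetical.

```python
import numpy as np

def relation_from_conjugate_half(half):
    """Hypothetical sketch: expand a stored half-size relation embedding
    into a full one by tying the second half to the conjugate of the first.
    Only len(half) complex parameters need to be stored and trained."""
    return np.concatenate([half, np.conj(half)])

def complex_score(h, r, t):
    """ComplEx scoring function: Re(<h, r, conj(t)>)."""
    return np.real(np.sum(h * r * np.conj(t)))

rng = np.random.default_rng(0)
d = 4  # embedding dimension (toy size for illustration)

# Entity embeddings are full complex vectors, as in standard ComplEx.
h = rng.normal(size=d) + 1j * rng.normal(size=d)
t = rng.normal(size=d) + 1j * rng.normal(size=d)

# Relation embedding: store only d/2 parameters, expand via conjugation.
r_half = rng.normal(size=d // 2) + 1j * rng.normal(size=d // 2)
r = relation_from_conjugate_half(r_half)

score = complex_score(h, r, t)
```

Because the expanded half is a deterministic function of the stored half, gradients flow through the conjugation during training, so the parameter count (and optimizer state) for relations is halved without changing the scoring function's form.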
URL
https://arxiv.org/abs/2404.11809