Abstract
Representation learning in the form of semantic embeddings has been successfully applied to a variety of tasks in natural language processing and knowledge graphs. Recently, there has been growing interest in developing similar methods for learning embeddings of entire ontologies. We propose Box$^2$EL, a novel method for representation learning of ontologies in the Description Logic EL++, which represents both concepts and roles as boxes (i.e. axis-aligned hyperrectangles), such that the logical structure of the ontology is preserved. We theoretically prove the soundness of our model and conduct an extensive empirical evaluation, in which we achieve state-of-the-art results in subsumption prediction, link prediction, and deductive reasoning. As part of our evaluation, we introduce a novel benchmark for evaluating EL++ embedding models on predicting subsumptions involving both atomic and complex concepts.
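To make the core geometric idea concrete, here is a minimal, illustrative sketch of box embeddings in which concept subsumption C ⊑ D corresponds to box containment box(C) ⊆ box(D). The `make_box`/`contains` helpers and the example concepts are hypothetical; Box$^2$EL's actual loss functions and its "bump"-based role representation are defined in the paper itself.

```python
import numpy as np

def make_box(center, offset):
    """Axis-aligned box from a center point and per-dimension offsets."""
    center, offset = np.asarray(center, float), np.asarray(offset, float)
    return center - offset, center + offset  # (lower corner, upper corner)

def contains(outer, inner):
    """True if `inner` lies entirely inside `outer` (models inner ⊑ outer)."""
    (ol, ou), (il, iu) = outer, inner
    return bool(np.all(ol <= il) and np.all(iu <= ou))

# Hypothetical 2-D embeddings: Dog ⊑ Animal holds, Dog ⊑ Plant does not.
animal = make_box([0.0, 0.0], [2.0, 2.0])
dog    = make_box([0.5, 0.5], [0.5, 0.5])
plant  = make_box([5.0, 5.0], [1.0, 1.0])

print(contains(animal, dog))  # True
print(contains(plant, dog))   # False
```

In practice such containment conditions are relaxed into differentiable losses so the boxes can be trained by gradient descent; the sketch only shows the geometry underlying the subsumption semantics.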
URL
https://arxiv.org/abs/2301.11118