Abstract
Complex logical query answering (CLQA) in knowledge graphs (KGs) goes beyond simple KG completion and aims at answering compositional queries composed of multiple projections and logical operations. Existing CLQA methods that learn parameters bound to a particular entity or relation vocabulary can only be applied to the graph they were trained on, which requires substantial training time before deployment on a new graph. Here we present UltraQuery, an inductive reasoning model that can zero-shot answer logical queries on any KG. The core idea of UltraQuery is to derive both projections and logical operations as vocabulary-independent functions that generalize to new entities and relations in any KG. With the projection operation initialized from a pre-trained inductive KG reasoning model, UltraQuery can solve CLQA on any KG even when fine-tuned on only a single dataset. In experiments on 23 datasets, UltraQuery in zero-shot inference mode shows competitive or better query answering performance than the best available baselines and sets a new state of the art on 14 of them.
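The notion of vocabulary-independent logical operations can be illustrated with a minimal sketch: represent an intermediate query answer as a vector of fuzzy scores in [0, 1], one per entity, and apply logical operators element-wise. Because the operators act only on scores, they carry no entity- or relation-specific parameters and transfer to any graph. The specific operator choice below (product t-norm and its duals) is an assumption for illustration, not necessarily the paper's exact formulation.

```python
# Sketch of vocabulary-independent logical operations over fuzzy entity
# scores. A (sub-)query answer is a list of scores in [0, 1], one per
# entity of whatever graph is being queried; the operators below are
# element-wise and hence parameter-free and graph-agnostic.
# Assumed operators: product t-norm for AND, probabilistic sum for OR,
# complement for NOT (one common fuzzy-logic choice, not the paper's
# confirmed design).

def conjunction(a, b):
    """Fuzzy AND: element-wise product t-norm."""
    return [x * y for x, y in zip(a, b)]

def disjunction(a, b):
    """Fuzzy OR: probabilistic sum, the dual of the product t-norm."""
    return [x + y - x * y for x, y in zip(a, b)]

def negation(a):
    """Fuzzy NOT: complement of each score."""
    return [1.0 - x for x in a]

# Scores from two hypothetical sub-queries over a 4-entity graph.
p = [0.9, 0.1, 0.8, 0.0]
q = [0.5, 0.7, 0.9, 0.2]

print(conjunction(p, q))  # entities likely satisfying both branches
print(disjunction(p, q))  # entities likely satisfying either branch
print(negation(p))        # entities likely NOT answering p
```

The same three functions work unchanged on a graph with a different number of entities, which is the sense in which logical operations here are vocabulary-independent; the projection operation (producing the score vectors) is where a pre-trained inductive KG reasoner comes in.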
Abstract (translated)
Complex logical query answering (CLQA) in knowledge graphs (KGs) goes beyond simple KG completion and aims to answer compound queries composed of multiple projections and logical operations. Existing CLQA methods can only be applied to the graph they were trained on, which requires substantial training time before deployment on a new graph. Here we present UltraQuery, an inductive reasoning model that can answer logical queries on any KG zero-shot. The core idea of UltraQuery is to treat projections and logical operations as vocabulary-independent functions that extend to new entities and relations in any KG. By initializing the projection operation from a pre-trained inductive KG reasoning model, UltraQuery can solve CLQA even when fine-tuned on only a single dataset. Experiments on 23 datasets show that UltraQuery in zero-shot inference mode achieves competitive or better query answering performance than the best available baselines, reaching state-of-the-art performance on 14 of them.
URL
https://arxiv.org/abs/2404.07198