Abstract
Knowledge graph completion (KGC) aims to alleviate the inherent incompleteness of knowledge graphs (KGs), a task critical to applications such as recommendation on the web. Although knowledge graph embedding (KGE) models achieve strong predictive performance on KGC, they infer missing links in a black-box manner that lacks transparency and accountability, preventing researchers from developing accountable models. Existing explanation methods for KGE models search for key paths or isolated edges as explanations, which carry too little information to justify a target prediction. Moreover, because ground-truth explanations are unavailable, these methods cannot quantitatively evaluate the explanations they discover. To overcome these limitations, we propose KGExplainer, a model-agnostic method that identifies connected subgraphs as explanations and distills an evaluator to assess them quantitatively. KGExplainer employs a perturbation-based greedy search to find key connected subgraphs within the local structure around a target prediction. To evaluate the quality of the explored explanations, KGExplainer distills an evaluator from the target KGE model; by forwarding an explanation to the evaluator, our method can examine its fidelity. Extensive experiments on benchmark datasets demonstrate that KGExplainer yields promising improvements and achieves an optimal ratio of 83.3% in human evaluation.
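To make the perturbation-based greedy search concrete, the following is a minimal illustrative sketch, not the authors' implementation. It assumes a hypothetical KGE scoring function `score` over an edge set and greedily removes the edge whose deletion perturbs the prediction score the least, while keeping the remaining explanation a connected subgraph; all names and the toy scoring scheme are assumptions for illustration.

```python
def connected(edges):
    """Check that the undirected graph induced by (h, r, t) triples is connected."""
    if not edges:
        return True
    adj = {}
    for h, _, t in edges:
        adj.setdefault(h, set()).add(t)
        adj.setdefault(t, set()).add(h)
    start = next(iter(adj))
    seen, stack = {start}, [start]
    while stack:
        for n in adj[stack.pop()]:
            if n not in seen:
                seen.add(n)
                stack.append(n)
    return len(seen) == len(adj)

def greedy_subgraph(edges, score, budget):
    """Greedily drop the edge whose removal changes the prediction score
    the least, keeping the remainder connected, until `budget` edges remain.
    `score` stands in for the target KGE model's prediction score."""
    kept = set(edges)
    base = score(kept)
    while len(kept) > budget:
        best_edge, best_drop = None, None
        for e in kept:
            cand = kept - {e}
            if not connected(cand):
                continue  # explanations must stay connected subgraphs
            drop = abs(base - score(cand))  # perturbation effect of removing e
            if best_drop is None or drop < best_drop:
                best_edge, best_drop = e, drop
        if best_edge is None:
            break  # no edge can be removed without disconnecting the subgraph
        kept.remove(best_edge)
    return kept
```

In this sketch the returned edge set is the candidate explanation; in KGExplainer its quality would then be assessed by forwarding it to the distilled evaluator.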
URL
https://arxiv.org/abs/2404.03893