Paper Reading AI Learner

Query Structure Modeling for Inductive Logical Reasoning Over Knowledge Graphs

2023-05-23 01:25:29
Siyuan Wang, Zhongyu Wei, Meng Han, Zhihao Fan, Haijun Shan, Qi Zhang, Xuanjing Huang

Abstract

Logical reasoning over incomplete knowledge graphs to answer complex logical queries is a challenging task. With the emergence of new entities and relations in constantly evolving KGs, inductive logical reasoning over KGs has become a crucial problem. However, previous PLM-based methods struggle to model the logical structures of complex queries, which limits their ability to generalize to queries of the same structure. In this paper, we propose a structure-modeled textual encoding framework for inductive logical reasoning over KGs. It encodes linearized query structures and entities using pre-trained language models to find answers. For structure modeling of complex queries, we design stepwise instructions that implicitly prompt PLMs on the execution order of geometric operations in each query. We further separately model different geometric operations (i.e., projection, intersection, and union) on the representation space, using a pre-trained encoder with additional attention and maxout layers to enhance structure modeling. We conduct experiments on two inductive logical reasoning datasets and three transductive datasets. The results demonstrate the effectiveness of our method on logical reasoning over KGs in both inductive and transductive settings.
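The linearization idea in the abstract — flattening a complex query into stepwise instructions that expose the execution order of projection, intersection, and union — can be sketched as follows. This is a toy illustration only: the query grammar, the instruction wording, and the entity/relation names (`Nolan`, `directed`, etc.) are assumptions for demonstration, not the paper's actual prompt format.

```python
def linearize(query):
    """Flatten a nested logical query into numbered stepwise instructions.

    Each intermediate result is named [V1], [V2], ... in execution order,
    so a text encoder sees the geometric operations in the order they
    would be executed over the KG.
    """
    steps = []

    def visit(node):
        if node["op"] == "entity":
            return node["name"]  # anchor entities are referenced by name
        args = [visit(child) for child in node["args"]]
        result = f"[V{len(steps) + 1}]"
        if node["op"] == "projection":
            steps.append(f"{result} = project({args[0]}, {node['relation']})")
        else:  # intersection or union over previously computed sets
            steps.append(f"{result} = {node['op']}({', '.join(args)})")
        return result

    answer = visit(query)
    return "; ".join(steps) + f"; answer is {answer}"


# A "2i" query: entities reachable from both anchors via one projection each.
two_i = {
    "op": "intersection",
    "args": [
        {"op": "projection", "relation": "directed",
         "args": [{"op": "entity", "name": "Nolan"}]},
        {"op": "projection", "relation": "starred_in",
         "args": [{"op": "entity", "name": "Bale"}]},
    ],
}

print(linearize(two_i))
# [V1] = project(Nolan, directed); [V2] = project(Bale, starred_in);
# [V3] = intersection([V1], [V2]); answer is [V3]
```

The resulting string can be concatenated with candidate entity text and fed to a PLM encoder; because new entities only appear as surface names, the same linearization works inductively for entities unseen during training.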

URL

https://arxiv.org/abs/2305.13585

PDF

https://arxiv.org/pdf/2305.13585.pdf

