Arbitrary Time Information Modeling via Polynomial Approximation for Temporal Knowledge Graph Embedding

2024-05-01 07:27:04
Zhiyu Fang, Jingyan Qin, Xiaobin Zhu, Chun Yang, Xu-Cheng Yin

Abstract

Distinguished from traditional knowledge graphs (KGs), temporal knowledge graphs (TKGs) must adequately explore and reason over temporally evolving facts. However, existing TKG approaches still face two main challenges: the limited capability to model arbitrary timestamps continuously and the lack of rich inference patterns under temporal constraints. In this paper, we propose an innovative temporal knowledge graph embedding (TKGE) method, PTBox, which tackles these problems via a polynomial decomposition-based temporal representation and a box embedding-based entity representation. Specifically, we decompose time information by polynomials and then enhance the model's capability to represent arbitrary timestamps flexibly by incorporating a learnable temporal basis tensor. In addition, we model every entity as a hyperrectangle box and define each relation as a transformation on the head and tail entity boxes. The entity boxes can capture complex geometric structures and learn robust representations, improving the model's inductive capability for rich inference patterns. Theoretically, our PTBox can encode arbitrary time information, even unseen timestamps, while capturing rich inference patterns and higher-arity relations of the knowledge base. Extensive experiments on real-world datasets demonstrate the effectiveness of our method.
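The abstract describes two components: a polynomial expansion of timestamps projected through a learnable temporal basis tensor, and hyperrectangle (box) embeddings for entities with per-relation transformations of the head and tail boxes. The listing contains no code, so the following is only a minimal PyTorch-style sketch of what such components could look like. The class names (PolyTimeEncoder, BoxEntityEmbedding, RelationBoxTransform), the center/offset parameterization of boxes, and the exact form of the relation transformation are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class PolyTimeEncoder(nn.Module):
    """Sketch: expand a timestamp into powers [1, t, ..., t^K] and
    project them through a learnable temporal basis tensor (assumed shape)."""

    def __init__(self, degree: int, dim: int):
        super().__init__()
        self.degree = degree
        self.basis = nn.Parameter(torch.randn(degree + 1, dim) * 0.01)

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t: (batch,) timestamps, assumed normalized to [0, 1]
        powers = torch.stack([t ** k for k in range(self.degree + 1)], dim=-1)
        return powers @ self.basis  # (batch, dim)


class BoxEntityEmbedding(nn.Module):
    """Sketch: each entity is a hyperrectangle stored as a center and a
    non-negative half-width (offset); corners are derived on the fly."""

    def __init__(self, num_entities: int, dim: int):
        super().__init__()
        self.center = nn.Embedding(num_entities, dim)
        self.offset = nn.Embedding(num_entities, dim)

    def forward(self, idx: torch.Tensor):
        center = self.center(idx)
        half = torch.abs(self.offset(idx))  # keep half-widths positive
        return center - half, center + half  # lower and upper box corners


class RelationBoxTransform(nn.Module):
    """Sketch: a per-relation shift and scale applied to an entity box;
    the actual transformation used by PTBox may differ."""

    def __init__(self, num_relations: int, dim: int):
        super().__init__()
        self.shift = nn.Embedding(num_relations, dim)
        self.scale = nn.Embedding(num_relations, dim)

    def forward(self, rel: torch.Tensor, lower: torch.Tensor, upper: torch.Tensor):
        center = (lower + upper) / 2 + self.shift(rel)
        half = (upper - lower) / 2 * torch.abs(self.scale(rel))
        return center - half, center + half
```

The center/offset parameterization is a common way to keep boxes well formed (non-negative half-widths) without constrained optimization; whether PTBox adopts this or another parameterization is not stated in the abstract.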

Abstract (translated)

Compared with traditional knowledge graphs (KGs), temporal knowledge graphs (TKGs) must adequately explore and reason over facts that evolve over time. However, existing TKG methods still face two main challenges: the inability to continuously model arbitrary timestamps and the lack of rich inference patterns under temporal constraints. In this paper, we propose an innovative temporal knowledge graph embedding (TKGE) method, PTBox, which addresses these problems through a polynomial-based temporal representation and a box embedding-based entity representation. Specifically, we decompose time information with polynomials and enhance the model's ability to represent arbitrary timestamps through a learnable temporal basis tensor. In addition, we model each entity as a hyperrectangular box and each relation as a transformation acting on the head and tail entity boxes. The entity boxes can capture complex geometric structures and learn robust representations, improving the model's inductive capability for rich inference patterns. Theoretically, our PTBox can encode arbitrary time information, even unseen timestamps, while capturing the rich inference patterns and higher-arity relations of the knowledge base. Extensive experiments on real-world datasets demonstrate the effectiveness of our method.

URL

https://arxiv.org/abs/2405.00358

PDF

https://arxiv.org/pdf/2405.00358.pdf

