Paper Reading AI Learner

Question Calibration and Multi-Hop Modeling for Temporal Question Answering

2024-02-20 17:56:24
Chao Xue, Di Liang, Pengfei Wang, Jing Zhang

Abstract

Many models that leverage knowledge graphs (KGs) have recently demonstrated remarkable success in question answering (QA) tasks. In the real world, many facts contained in KGs are time-constrained; thus, temporal KGQA has received increasing attention. Despite the fruitful efforts of previous models in temporal KGQA, they still have several limitations. (I) They adopt pre-trained language models (PLMs) to obtain question representations, but PLMs tend to focus on entity information, ignore the entity transfer caused by temporal constraints, and ultimately fail to learn time-specific representations of entities. (II) They neither emphasize the graph structure between entities nor explicitly model the multi-hop relationships in the graph, which makes complex multi-hop question answering difficult. To alleviate these problems, we propose a novel Question Calibration and Multi-Hop Modeling (QC-MHM) approach. Specifically, we first calibrate the question representation by fusing the question with the time-constrained concepts in the KG. Then, we construct a GNN layer to perform multi-hop message passing. Finally, the question representation is combined with the embeddings output by the GNN to generate the final prediction. Empirical results verify that the proposed model outperforms state-of-the-art models on the benchmark dataset. Notably, on the complex questions of the CronQuestions dataset, the Hits@1 and Hits@10 results of QC-MHM improve on the best-performing baseline by 5.1% and 1.2% (absolute). Moreover, QC-MHM can generate interpretable and trustworthy predictions.
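The three steps the abstract describes (calibrating the question against time-constrained KG concepts, multi-hop message passing over the graph, then scoring answers with the combined representations) can be sketched as toy code. This is a minimal illustration under assumed design choices (softmax attention for the fusion, mean aggregation with a residual for the GNN, dot-product scoring); all function names and dimensions are hypothetical and do not come from the authors' implementation.

```python
import math

def calibrate_question(q, concepts):
    """Step 1 (assumed form): attend over time-constrained concept vectors
    and fuse the weighted result back into the question vector."""
    scores = [sum(a * b for a, b in zip(q, c)) for c in concepts]
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    fused = [sum(e / z * c[i] for e, c in zip(exps, concepts))
             for i in range(len(q))]
    # residual fusion keeps the original question signal
    return [qi + fi for qi, fi in zip(q, fused)]

def message_pass(nodes, edges, hops=2):
    """Step 2 (assumed form): `hops` rounds of mean-aggregation message
    passing. nodes: {name: vector}; edges: (src, dst) pairs."""
    for _ in range(hops):
        updated = {}
        for v, vec in nodes.items():
            neigh = [nodes[u] for u, w in edges if w == v]
            if not neigh:
                updated[v] = vec
                continue
            agg = [sum(col) / len(neigh) for col in zip(*neigh)]
            updated[v] = [0.5 * a + 0.5 * b for a, b in zip(vec, agg)]
        nodes = updated
    return nodes

def score(q, entity_vec):
    """Step 3 (assumed form): dot-product score between the calibrated
    question and each entity embedding output by the GNN."""
    return sum(a * b for a, b in zip(q, entity_vec))

# Tiny end-to-end run on a two-node graph with one edge A -> B.
q = calibrate_question([1.0, 0.0], [[0.0, 1.0], [1.0, 1.0]])
entities = message_pass({"A": [1.0, 0.0], "B": [0.0, 1.0]}, [("A", "B")])
best = max(entities, key=lambda v: score(q, entities[v]))
```

The answer entity is simply the argmax of the scores; the real model would of course use learned projections and trained embeddings rather than these fixed toy vectors.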

URL

https://arxiv.org/abs/2402.13188

PDF

https://arxiv.org/pdf/2402.13188.pdf
