Paper Reading AI Learner

Relating-Up: Advancing Graph Neural Networks through Inter-Graph Relationships

2024-05-07 02:16:54
Qi Zou, Na Yu, Daoliang Zhang, Wei Zhang, Rui Gao

Abstract

Graph Neural Networks (GNNs) have excelled in learning from graph-structured data, especially in understanding the relationships within a single graph, i.e., intra-graph relationships. Despite their successes, GNNs are limited by neglecting the context of relationships across graphs, i.e., inter-graph relationships. Recognizing the potential to extend this capability, we introduce Relating-Up, a plug-and-play module that enhances GNNs by exploiting inter-graph relationships. This module incorporates a relation-aware encoder and a feedback training strategy. The former enables GNNs to capture relationships across graphs, enriching relation-aware graph representations through collective context. The latter uses a feedback loop mechanism to recursively refine these representations, leveraging insights from inter-graph dynamics to guide the refinement. The synergy between these two innovations results in a robust and versatile module. Relating-Up enhances the expressiveness of GNNs, enabling them to encapsulate a wider spectrum of graph relationships with greater precision. Our evaluations across 16 benchmark datasets demonstrate that integrating Relating-Up into GNN architectures substantially improves performance, positioning Relating-Up as a strong choice for a broad range of graph representation learning tasks.
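The abstract gives no implementation details, but the core idea of relating graphs to one another within a batch can be illustrated with a short, hypothetical sketch. The snippet below is an assumption, not the authors' Relating-Up implementation: it treats the pooled embeddings of the graphs in a batch as a set and lets each graph attend to the others via multi-head self-attention. The class name `RelationAwareEncoder`, the attention mechanism, and the residual/normalization choices are all illustrative.

```python
# Hypothetical sketch of a plug-and-play "relate across graphs" module.
# NOT the paper's implementation; all names and design choices are assumptions.
import torch
import torch.nn as nn


class RelationAwareEncoder(nn.Module):
    """Refines per-graph embeddings by attending to the other graphs in the batch."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, graph_emb: torch.Tensor) -> torch.Tensor:
        # graph_emb: (num_graphs, dim) -- one pooled (readout) embedding per graph
        x = graph_emb.unsqueeze(0)            # view the batch as a "sequence" of graphs
        ctx, _ = self.attn(x, x, x)           # each graph attends to every other graph
        return self.norm(x + ctx).squeeze(0)  # residual + norm -> relation-aware embeddings


if __name__ == "__main__":
    # Usage: plug between any GNN's graph-level readout and the downstream classifier.
    pooled = torch.randn(32, 128)             # e.g. 32 graphs, 128-dim readout from a GNN
    encoder = RelationAwareEncoder(dim=128)
    refined = encoder(pooled)                  # same shape, now informed by the other graphs
    print(refined.shape)                       # torch.Size([32, 128])
```

Because the module only consumes and returns graph-level embeddings, a sketch like this can sit behind any GNN backbone; the paper's feedback training strategy, which recursively refines these representations, is not modeled here.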

URL

https://arxiv.org/abs/2405.03950

PDF

https://arxiv.org/pdf/2405.03950.pdf
