Abstract
Graph Neural Networks (GNNs) have excelled in learning from graph-structured data, especially in understanding the relationships within a single graph, i.e., intra-graph relationships. Despite their successes, GNNs are limited by neglecting the context of relationships across graphs, i.e., inter-graph relationships. Recognizing the potential to extend this capability, we introduce Relating-Up, a plug-and-play module that enhances GNNs by exploiting inter-graph relationships. This module incorporates a relation-aware encoder and a feedback training strategy. The former enables GNNs to capture relationships across graphs, enriching relation-aware graph representations through collective context. The latter uses a feedback loop mechanism to recursively refine these representations, leveraging insights from inter-graph dynamics to guide the feedback loop. The synergy between these two innovations results in a robust and versatile module. Relating-Up enhances the expressiveness of GNNs, enabling them to encapsulate a wider spectrum of graph relationships with greater precision. Our evaluations across 16 benchmark datasets demonstrate that integrating Relating-Up into GNN architectures substantially improves performance, positioning Relating-Up as a formidable choice for a broad spectrum of graph representation learning tasks.
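The abstract does not specify how the relation-aware encoder is realized. As a minimal, hypothetical sketch (not the paper's actual implementation), the idea of enriching each graph-level representation with collective context from the other graphs in a batch could be modeled as dot-product attention over graph embeddings, combined with a residual connection; all function names and design choices below are assumptions for illustration:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def relation_aware_encode(graph_embs):
    """Hypothetical sketch of a relation-aware encoder: refine each
    graph-level embedding by attending over all graphs in the batch
    (inter-graph context), then add it back residually so the original
    intra-graph signal is preserved.

    graph_embs: list of equal-length float vectors, one per graph.
    Returns a list of refined vectors of the same shape.
    """
    dim = len(graph_embs[0])
    refined = []
    for q in graph_embs:
        # Dot-product similarity of this graph to every graph in the batch.
        scores = [sum(a * b for a, b in zip(q, k)) for k in graph_embs]
        weights = softmax(scores)
        # Context vector: attention-weighted sum of the batch embeddings.
        ctx = [sum(w * v[d] for w, v in zip(weights, graph_embs))
               for d in range(dim)]
        # Residual combination: intra-graph embedding + inter-graph context.
        refined.append([a + b for a, b in zip(q, ctx)])
    return refined
```

The feedback training strategy would then re-apply such an encoder to its own outputs across training iterations; that loop is omitted here since the abstract gives no further detail.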
URL
https://arxiv.org/abs/2405.03950