Abstract
Graph Neural Networks (GNNs) are the de facto solution to learning on structured data. However, they are susceptible to low-quality and unreliable structure, which is the norm rather than the exception in real-world graphs. Existing graph structure learning (GSL) frameworks still lack robustness and interpretability. This paper proposes a general GSL framework, SE-GSL, built on structural entropy and the graph hierarchy abstracted in an encoding tree. In particular, we exploit one-dimensional structural entropy to maximize the embedded information content when auxiliary neighbourhood attributes are fused to enhance the original graph. We propose a new scheme for constructing optimal encoding trees that minimizes uncertainty and noise in the graph while ensuring a proper community partition in the hierarchical abstraction. We further present a novel sample-based mechanism that restores the graph structure via the node structural entropy distribution, increasing the connectivity among nodes with larger uncertainty in lower-level communities. SE-GSL is compatible with various GNN models and improves robustness to noisy and heterophilous structures. Extensive experiments show significant improvements in the effectiveness and robustness of structure learning and node representation learning.
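To make the notion of one-dimensional structural entropy concrete, here is a minimal sketch computing it for an unweighted, undirected graph. It follows the standard definition H¹(G) = -Σᵢ (dᵢ/2m)·log₂(dᵢ/2m), where dᵢ is the degree of node i and m the edge count; the function name and edge-list representation are illustrative assumptions, not taken from the paper's implementation.

```python
import math

def one_dim_structural_entropy(edges):
    """One-dimensional structural entropy of an undirected graph:
    H1(G) = -sum_i (d_i / 2m) * log2(d_i / 2m),
    where d_i is the degree of node i and m is the number of edges.
    (Illustrative helper; not the paper's official code.)"""
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    vol = 2 * len(edges)  # graph volume = sum of all degrees
    return -sum((d / vol) * math.log2(d / vol) for d in degree.values())

# A 4-cycle: every node has degree 2, so each term is 0.25 * 2 bits
# and the entropy equals log2(4) = 2 bits.
print(one_dim_structural_entropy([(0, 1), (1, 2), (2, 3), (3, 0)]))  # → 2.0
```

Intuitively, the entropy is maximal for degree-regular graphs (uniform stationary distribution of a random walk) and drops as the degree distribution becomes skewed, which is why maximizing it while fusing auxiliary neighbourhood attributes favours structures that carry more embedded information.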
URL
https://arxiv.org/abs/2303.09778