Abstract
Graph Neural Networks (GNNs) have demonstrated state-of-the-art performance in various graph representation learning tasks. Recent studies, however, have revealed their vulnerability to adversarial attacks. In this work, we theoretically define the concept of expected robustness in the context of attributed graphs and relate it to the classical definition of adversarial robustness in the graph representation learning literature. Our definition allows us to derive an upper bound on the expected robustness of Graph Convolutional Networks (GCNs) and Graph Isomorphism Networks subject to node feature attacks. Building on these findings, we connect the expected robustness of GNNs to the orthonormality of their weight matrices and consequently propose an attack-independent, more robust variant of the GCN, called the Graph Convolutional Orthonormal Robust Networks (GCORNs). We further introduce a probabilistic method to estimate the expected robustness, which allows us to evaluate the effectiveness of GCORN on several real-world datasets. Our experiments show that GCORN outperforms available defense methods. Our code is publicly available at: \href{this https URL}{this https URL}.
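The link between orthonormal weight matrices and robustness to node feature attacks can be illustrated with a minimal sketch. The snippet below is not the paper's actual training procedure; it simply projects a weight matrix onto the nearest orthonormal matrix via the polar decomposition (a standard technique) and checks the property that motivates the bound: an orthonormal layer cannot amplify a feature perturbation.

```python
import numpy as np

rng = np.random.default_rng(0)

def orthonormalize(W):
    """Project W onto the nearest orthonormal matrix (polar decomposition:
    W = U S V^T  ->  U V^T). Illustrative only, not GCORN's exact scheme."""
    U, _, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ Vt

d = 16
W = rng.normal(size=(d, d))       # a hypothetical GCN weight matrix
W_orth = orthonormalize(W)

# Orthonormality: W_orth^T W_orth = I.
print(np.allclose(W_orth.T @ W_orth, np.eye(d)))

# 1-Lipschitz property: a node-feature perturbation delta keeps its norm
# after the linear map, so the layer cannot amplify the attack.
delta = rng.normal(size=d)
print(np.isclose(np.linalg.norm(delta @ W_orth), np.linalg.norm(delta)))
```

Because each orthonormal layer preserves perturbation norms, stacking such layers keeps the output deviation bounded by the input perturbation, which is the intuition behind tying expected robustness to weight orthonormality.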
URL
https://arxiv.org/abs/2404.17947