Abstract
Graph neural networks are becoming increasingly popular in machine learning due to their unique ability to process graph-structured data. They are also applied in safety-critical environments where perturbations inherently occur. Because neural networks are prone to adversarial attacks, such perturbations require us to formally verify a network before its deployment. While research on the formal verification of neural networks exists, no prior work verifies the robustness of generic graph convolutional network architectures under uncertainty in both the node features and the graph structure over multiple message-passing steps. This work addresses this research gap by explicitly preserving the non-convex dependencies of all elements in the underlying computations through reachability analysis with (matrix) polynomial zonotopes. We demonstrate our approach on three popular benchmark datasets.
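As an illustrative sketch only (not the paper's implementation), the core idea of zonotope-based reachability can be seen in how a plain degree-1 zonotope — a set {c + G·α : α ∈ [-1,1]^p} with center c and generator matrix G — propagates exactly through a linear map such as a weight multiplication in a network layer. The function name below is hypothetical; polynomial zonotopes additionally carry higher-degree monomials in α, which this simplified sketch omits.

```python
import numpy as np

def linear_map_zonotope(c, G, W, b):
    """Exact image of the zonotope {c + G @ a : a in [-1,1]^p}
    under the affine map x -> W @ x + b.

    Linear maps distribute over the generators, so the image is
    again a zonotope with center W @ c + b and generators W @ G.
    """
    return W @ c + b, W @ G

def interval_bounds(c, G):
    """Tight axis-aligned interval enclosure of a zonotope:
    each generator contributes its absolute value to the radius."""
    r = np.abs(G).sum(axis=1)
    return c - r, c + r

# Toy 2-D example with two generators.
c = np.array([1.0, 0.0])
G = np.array([[0.5, 0.0],
              [0.0, 0.5]])
W = np.array([[1.0, 2.0],
              [0.0, 1.0]])
b = np.array([0.0, 1.0])

c2, G2 = linear_map_zonotope(c, G, W, b)
lo, hi = interval_bounds(c2, G2)
```

Nonlinear activations break this exactness, which is where polynomial zonotopes help: they can represent the non-convex output sets of such operations without collapsing the dependency structure between layers, as the abstract describes.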
URL
https://arxiv.org/abs/2404.15065