Abstract
Graph neural networks have inherent representational limitations due to their message-passing structure. Recent work has suggested that these limitations can be overcome by using unique node identifiers (UIDs). We argue that despite their advantages, UIDs come at a cost: models that use them lose the desirable property of permutation equivariance. We therefore focus on UID models that retain permutation equivariance and present theoretical arguments for their advantages. Motivated by this analysis, we propose a method that regularizes UID models towards permutation equivariance via a contrastive loss. We empirically demonstrate that our approach improves generalization and extrapolation while converging faster during training. On the recent BREC expressiveness benchmark, our method achieves state-of-the-art performance compared to other randomness-based approaches.
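For context: a GNN f is permutation-equivariant when f(PX, PAP^T) = P f(X, A) for every permutation matrix P, and appending random UIDs to the node features breaks this property because the output then depends on the arbitrary identifier assignment. The abstract does not spell out the regularizer, so the following is only a minimal sketch of the general idea: penalize the sensitivity of node embeddings to the random UID draw. A simple mean-squared consistency penalty stands in for the paper's contrastive loss, whose exact form is not reproduced here; the model signature model(features, adj) and the one-hot UID encoding are assumptions for illustration.

    import torch
    import torch.nn.functional as F

    def uid_consistency_loss(model, x, adj, num_draws=2):
        """Hypothetical sketch: regularize a UID-augmented GNN towards
        permutation equivariance by penalizing output differences across
        independent random UID assignments.

        Assumptions (not from the paper): `model(features, adj)` returns node
        embeddings of shape [n, d], and UIDs are one-hot vectors concatenated
        to the node features.
        """
        n = x.size(0)
        outputs = []
        for _ in range(num_draws):
            # Draw a fresh injective labeling: a random permutation of
            # one-hot identifiers, one per node.
            perm = torch.randperm(n)
            uids = torch.eye(n)[perm]                      # [n, n] one-hot UIDs
            outputs.append(model(torch.cat([x, uids], dim=-1), adj))
        # Consistency term: embeddings should not depend on which UIDs
        # happened to be drawn. The paper uses a contrastive loss instead;
        # MSE is a simpler stand-in with the same qualitative effect.
        return sum(F.mse_loss(outputs[i], outputs[0])
                   for i in range(1, num_draws))

In training, such a term would be added to the task loss with a weighting coefficient, so the model keeps the extra expressive power of UIDs while being pushed towards UID-invariant (and hence permutation-equivariant) behavior.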
URL
https://arxiv.org/abs/2411.02271