Abstract
Past decades have witnessed great interest in the distinction and connection between neural network learning and kernel learning. Recent advances have made theoretical progress in connecting infinitely wide neural networks and Gaussian processes. Two predominant approaches have emerged: the Neural Network Gaussian Process (NNGP) and the Neural Tangent Kernel (NTK). The former, rooted in Bayesian inference, represents a zero-order kernel, while the latter, grounded in the tangent space of gradient descent, is a first-order kernel. In this paper, we present the Unified Neural Kernel (UNK), which characterizes the learning dynamics of neural networks under gradient descent together with parameter initialization. The proposed UNK kernel retains the limiting properties of both the NNGP and the NTK: it behaves like the NTK for a finite learning step and converges to the NNGP as the learning step approaches infinity. Moreover, we theoretically characterize the uniform tightness and learning convergence of the UNK kernel, providing comprehensive insight into this unified kernel. Experimental results underscore the effectiveness of the proposed method.
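For context, the two limiting kernels mentioned above have standard definitions; a minimal sketch follows, assuming a network output $f(x;\theta)$ with parameters $\theta$ drawn from the initialization distribution (the exact form of the UNK kernel is specified in the paper and not reproduced here):

$$K_{\mathrm{NNGP}}(x, x') = \mathbb{E}_{\theta}\big[f(x;\theta)\,f(x';\theta)\big], \qquad K_{\mathrm{NTK}}(x, x') = \mathbb{E}_{\theta}\big[\langle \nabla_{\theta} f(x;\theta),\, \nabla_{\theta} f(x';\theta)\rangle\big].$$

In this notation, the abstract's claim is that the UNK kernel at learning step $t$, denoted here $K^{(t)}_{\mathrm{UNK}}$, behaves like $K_{\mathrm{NTK}}$ for finite $t$ and converges to $K_{\mathrm{NNGP}}$ as $t \to \infty$.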
Abstract (translated)
Over the past decades, there has been strong interest in the distinction and connection between neural network learning and kernel learning. Recent developments have made the theoretical connection between infinitely wide neural networks and Gaussian processes more explicit. Two main approaches have emerged: the Neural Network Gaussian Process (NNGP) and the Neural Tangent Kernel (NTK). The former is rooted in Bayesian inference and represents a zero-order kernel, while the latter is grounded in the tangent space of gradient descent and represents a first-order kernel. In this paper, we propose the Unified Neural Kernel (UNK), which describes the learning dynamics of neural networks trained with gradient descent and parameter initialization. The proposed UNK kernel preserves the limiting properties of both the NNGP and the NTK, behaving like the NTK for a finite number of learning steps and approaching the NNGP as the number of learning steps tends to infinity. In addition, we theoretically characterize the uniform tightness and learning convergence of the UNK kernel, providing comprehensive insight into this unified kernel. Experimental results confirm the effectiveness of the proposed method.
URL
https://arxiv.org/abs/2403.17467