Abstract
We prove that overparametrized neural networks are able to generalize with a test error that is independent of the level of overparametrization and of the Vapnik-Chervonenkis (VC) dimension. We establish explicit bounds that depend only on the metric geometry of the test and training sets, on the regularity properties of the activation function, and on the operator norms of the weights and the norms of the biases. For overparametrized deep ReLU networks with a training sample size bounded by the input space dimension, we explicitly construct zero-loss minimizers without the use of gradient descent, and prove that the generalization error is independent of the network architecture.
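The constructive claim in the last sentence can be made concrete. Below is a minimal sketch, not the construction from the paper: assuming a sample size N at most the input dimension d, with inputs in general position and scalar labels, it produces an exact interpolant by solving a linear system rather than by gradient descent, and then realizes that interpolant as a small ReLU network via the identity relu(t) - relu(-t) = t. All names (X, y, w, f) are illustrative.

```python
import numpy as np

# Hypothetical illustration (not the paper's construction): when N <= d and
# the training inputs are in general position, scalar labels can be
# interpolated exactly by a single affine map, found by solving a linear
# system -- no gradient descent involved.
rng = np.random.default_rng(0)
d, N = 10, 6                        # input dimension d, sample size N <= d
X = rng.standard_normal((N, d))     # training inputs (full row rank a.s.)
y = rng.standard_normal(N)          # arbitrary scalar labels

# Minimum-norm solution of the underdetermined system X w = y.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.allclose(X @ w, y)        # exact interpolation: zero training loss

# The identity relu(t) - relu(-t) = t lets a two-neuron ReLU layer realize
# the same map, so a ReLU network attains zero loss by explicit construction.
relu = lambda t: np.maximum(t, 0.0)
f = lambda x: relu(x @ w) - relu(-(x @ w))
assert np.allclose(f(X), y)
```

The point of the sketch is only that, in the regime N <= d, zero training loss is attainable by direct linear algebra; the paper's actual minimizers and its architecture-independent generalization bounds are proved analytically.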
URL
https://arxiv.org/abs/2504.05695