Abstract
Modern CNNs learn the weights of vast numbers of convolutional operators. In this paper, we raise the fundamental question of whether this is actually necessary. We show that even in the extreme case of only randomly initializing and never updating spatial filters, certain CNN architectures can be trained to surpass the accuracy of standard training. By reinterpreting pointwise ($1\times 1$) convolutions as an operator that learns linear combinations (LC) of frozen (random) spatial filters, we are able to analyze these effects and propose a generic LC convolution block that allows tuning of the linear combination rate. Empirically, we show that this approach not only reaches high test accuracies on CIFAR and ImageNet but also has favorable properties regarding model robustness, generalization, sparsity, and the total number of necessary weights. Additionally, we propose a novel weight-sharing mechanism that shares a single weight tensor across all spatial convolution layers, massively reducing the number of weights.
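The core idea admits a compact illustration. Below is a minimal PyTorch sketch of an LC-style block under stated assumptions: the spatial $3\times 3$ filters are randomly initialized and frozen, and only the pointwise ($1\times 1$) convolution that linearly combines their responses is trained. The class name `LCBlock` and the `expansion` parameter (standing in for the linear combination rate) are illustrative, not the authors' API.

```python
import torch
import torch.nn as nn


class LCBlock(nn.Module):
    """Frozen random spatial filters followed by a learnable 1x1 combination.

    A sketch of the idea from the abstract, not the paper's exact block.
    """

    def __init__(self, in_channels: int, out_channels: int, expansion: int = 2):
        super().__init__()
        # Frozen random spatial filters: a depthwise-style 3x3 convolution
        # expanding each input channel into `expansion` feature maps.
        self.spatial = nn.Conv2d(
            in_channels,
            in_channels * expansion,
            kernel_size=3,
            padding=1,
            groups=in_channels,
            bias=False,
        )
        self.spatial.weight.requires_grad_(False)  # never updated during training

        # Learnable pointwise (1x1) convolution: learns linear combinations
        # of the frozen spatial filter responses.
        self.pointwise = nn.Conv2d(
            in_channels * expansion, out_channels, kernel_size=1, bias=False
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.spatial(x))


# Usage: only the 1x1 weights receive gradients.
x = torch.randn(1, 16, 32, 32)
block = LCBlock(16, 32, expansion=2)
y = block(x)
print(y.shape)  # torch.Size([1, 32, 32, 32])
trainable = sum(p.numel() for p in block.parameters() if p.requires_grad)
print(trainable)  # 1024 = (16 * 2) * 32 pointwise weights
```

In this sketch each block owns its frozen filters; the weight-sharing mechanism mentioned above would instead reuse a single frozen spatial weight tensor across all such blocks, shrinking the weight count further.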
URL
https://arxiv.org/abs/2301.11360