Abstract
Pretraining a neural network on a large dataset is becoming a cornerstone of machine learning, yet it remains within reach of only a few communities with large resources. We aim at the ambitious goal of democratizing pretraining. Towards that goal, we train and release a single neural network that can predict high-quality ImageNet parameters of other neural networks. Using the predicted parameters as initialization, we are able to boost the training of diverse ImageNet models available in PyTorch. When transferred to other datasets, models initialized with predicted parameters also converge faster and reach competitive final performance.
URL
https://arxiv.org/abs/2303.04143
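The abstract describes a network that predicts the parameters of another network, which are then used as that network's initialization. The following is a minimal NumPy sketch of that data flow only, not the paper's trained graph hypernetwork: the architecture embedding, the fixed linear "hypernetwork," and the tiny target MLP are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target network: a tiny 2-layer MLP (4 -> 8 -> 2). Only its parameter
# shapes matter here; this is a hypothetical stand-in for an ImageNet model.
shapes = [(8, 4), (8,), (2, 8), (2,)]
n_params = sum(int(np.prod(s)) for s in shapes)  # total parameters to predict

# "Hypernetwork": here just a fixed random linear map from an architecture
# embedding to a flat parameter vector. The real system is a trained network;
# this stand-in only illustrates predicting one network's parameters with another.
arch_embedding = rng.standard_normal(16)           # hypothetical architecture code
W_hyper = rng.standard_normal((n_params, 16)) * 0.05
flat_params = W_hyper @ arch_embedding             # predicted flat parameter vector

# Unflatten into the target network's parameter tensors: its initialization.
params, offset = [], 0
for s in shapes:
    size = int(np.prod(s))
    params.append(flat_params[offset:offset + size].reshape(s))
    offset += size

def mlp_forward(x, params):
    """Run the target MLP with the predicted parameters."""
    w1, b1, w2, b2 = params
    h = np.maximum(x @ w1.T + b1, 0.0)  # ReLU hidden layer
    return h @ w2.T + b2

out = mlp_forward(rng.standard_normal((3, 4)), params)
print(out.shape)  # (3, 2)
```

In the paper's setting, the predicted tensors would be loaded into a PyTorch model (e.g. via `load_state_dict`) and training would then proceed from that initialization rather than from random weights.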