Abstract
The one-shot method is a powerful Neural Architecture Search (NAS) framework, but its training is non-trivial, and it is difficult to achieve competitive results on large-scale datasets such as ImageNet. In this work, we propose a Single Path One-Shot model to address the main challenge in its training. Our central idea is to construct a simplified supernet, the Single Path Supernet, which is trained with a uniform path sampling method. All underlying architectures (and their weights) are trained fully and equally. Once the supernet is trained, we apply an evolutionary algorithm to efficiently search for the best-performing architectures without any fine-tuning. Comprehensive experiments verify that our approach is flexible and effective. It is easy to train and fast to search. It effortlessly supports complex search spaces (e.g., building blocks, channels, mixed-precision quantization) and different search constraints (e.g., FLOPs, latency). It is thus convenient to use for various needs. It achieves state-of-the-art performance on the large-scale ImageNet dataset.
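The uniform single-path sampling described above can be illustrated with a minimal sketch (not the authors' implementation; all function names here are hypothetical): at every training step, exactly one candidate block per layer is drawn uniformly at random, so each architecture in the search space receives gradient updates with equal probability in expectation.

```python
import random

def sample_single_path(num_layers, num_choices, rng=random):
    """Uniformly sample one candidate block per layer (a 'single path')."""
    return [rng.randrange(num_choices) for _ in range(num_layers)]

def train_supernet(num_layers, num_choices, steps, rng=random):
    """Sketch of single-path supernet training: each step would update
    only the weights along one uniformly sampled path, so all candidate
    blocks are trained equally in expectation. Here we only record which
    blocks each step would touch."""
    visit_counts = [[0] * num_choices for _ in range(num_layers)]
    for _ in range(steps):
        path = sample_single_path(num_layers, num_choices, rng)
        # In a real supernet, forward/backward would run only through
        # the chosen block of each layer; here we just count the visit.
        for layer, choice in enumerate(path):
            visit_counts[layer][choice] += 1
    return visit_counts
```

After training, the decoupled search step (the evolutionary algorithm in the paper) would evaluate candidate paths by inheriting the shared supernet weights directly, which is why no per-architecture fine-tuning is needed.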
URL
https://arxiv.org/abs/1904.00420