Abstract
We propose AutoGrow to automate depth discovery in Deep Neural Networks (DNNs): starting from a shallow seed architecture, AutoGrow grows new layers if the growth improves the accuracy; otherwise, the growth stops and the network depth is discovered. Residual and plain blocks are used as growing sub-modules to study DNNs with and without shortcut connections. We propose generic growing and stopping policies to minimize the human effort spent on searching for the optimal depth. Our experiments show that, by applying the same policy to different tasks, AutoGrow consistently discovers an effective network depth and achieves state-of-the-art accuracy on MNIST, FashionMNIST, SVHN, CIFAR10, CIFAR100, and ImageNet. Compared to Neural Architecture Search (NAS), which often designs a gigantic search space and consumes tremendous resources, AutoGrow lies at the other end of the research spectrum: it focuses on efficient depth discovery and reduces the growing and searching time to a level similar to that of training a single DNN. Thus, AutoGrow is able to scale up to large datasets such as ImageNet. Our study also reveals that previous Network Morphism is sub-optimal for increasing layer depth. Finally, we demonstrate that AutoGrow enables the training of deeper plain networks, which has been problematic even with Batch Normalization.
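To make the grow-or-stop loop in the abstract concrete, below is a minimal Python sketch of that control flow. The names (autogrow, train_and_evaluate, seed_depth, epsilon) and the simulated accuracy curve are illustrative assumptions, not the paper's actual policies or API; the paper's growing and stopping policies are richer than a single improvement threshold.

```python
# Minimal sketch of the depth-discovery loop described in the abstract:
# grow one block at a time while accuracy keeps improving, then stop.
# All names and the simulated accuracy are hypothetical, for illustration only.

def train_and_evaluate(depth: int) -> float:
    """Stand-in for training a network with `depth` blocks and returning
    validation accuracy. Simulated here with diminishing returns in depth."""
    return 1.0 - 0.5 / depth  # accuracy saturates as depth grows


def autogrow(seed_depth: int = 2, epsilon: float = 0.01, max_depth: int = 50) -> int:
    """Grow from a shallow seed; stop once the accuracy gain falls below epsilon."""
    depth = seed_depth
    best_acc = train_and_evaluate(depth)
    while depth < max_depth:
        candidate_acc = train_and_evaluate(depth + 1)
        if candidate_acc - best_acc < epsilon:  # growth no longer helps: stop
            break
        depth, best_acc = depth + 1, candidate_acc  # accept the grown network
    return depth  # the discovered depth


if __name__ == "__main__":
    print("Discovered depth:", autogrow())  # grows until gains drop below 1%
```

With this toy accuracy curve the loop grows from 2 to 7 blocks before the marginal gain drops below the threshold; the real method instead trains the grown sub-modules on actual data and applies its stopping policy per growth step.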
URL
https://arxiv.org/abs/1906.02909