Abstract
The success of neural networks has driven a shift in focus from feature engineering to architecture engineering. However, successful networks today are constructed from a small, manually defined set of building blocks. Even in neural architecture search (NAS) methods, the network connectivity patterns are largely constrained. In this work we propose a method for discovering neural wirings. We relax the typical notion of layers and instead enable channels to form connections independently of each other. This allows for a much larger space of possible networks. The wiring of our network is not fixed during training: as we learn the network parameters, we also learn the structure itself. Our experiments demonstrate that our learned connectivity outperforms hand-engineered and randomly wired networks. By learning the connectivity of MobileNetV1 [9], we boost the ImageNet accuracy by 10% at ~41M FLOPs. Moreover, we show that our method generalizes to recurrent and continuous time networks.
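To make the core idea concrete, below is a minimal sketch (in PyTorch, with illustrative names and an illustrative value of k; this is not the paper's released code) of learning wiring jointly with weights: a dense set of candidate channel-to-channel edges is maintained, each forward pass uses only the k highest-magnitude edges, and gradients flow to every candidate edge so that currently unused edges can grow and swap into the active wiring.

```python
import torch
import torch.nn as nn


class LearnedWiring(nn.Module):
    """Sketch of a layer whose wiring is discovered during training.

    Each entry of `weight` is a candidate edge between an input and an
    output channel. The forward pass uses only the k edges with the
    largest magnitude, but a straight-through trick lets gradients reach
    all candidate edges, so the active edge set can change as training
    proceeds. Names and hyperparameters here are assumptions for
    illustration, not the paper's API.
    """

    def __init__(self, in_channels: int, out_channels: int, k: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_channels, in_channels) * 0.1)
        self.k = k  # number of edges kept in the discovered wiring

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight
        # Threshold at the k-th largest magnitude among all candidate edges.
        threshold = w.abs().flatten().kthvalue(w.numel() - self.k + 1).values
        mask = (w.abs() >= threshold).float()
        # Straight-through estimator: the forward value equals mask * w
        # (inactive edges contribute zero), while the backward pass sends
        # gradients to every candidate edge, active or not.
        w_sparse = mask * w + (1 - mask) * (w - w.detach())
        return x @ w_sparse.t()
```

For example, `LearnedWiring(64, 64, k=512)` applied to a batch `torch.randn(8, 64)` uses 512 of the 4096 candidate edges on each forward pass, and which 512 are used can change from one training step to the next.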
URL
https://arxiv.org/abs/1906.00586