Abstract
The high sensitivity of neural architecture search (NAS) methods to their inputs, such as step-size (i.e., learning rate) and search space, prevents practitioners from applying them out-of-the-box to their own problems, even though their purpose is to automate part of the tuning process. Aiming at a fast, robust, and widely applicable NAS, we develop a generic optimization framework for NAS. We turn the coupled optimization of connection weights and neural architecture into a differentiable optimization by means of stochastic relaxation. It accepts an arbitrary search space (widely applicable) and enables gradient-based simultaneous optimization of weights and architecture (fast). We propose a stochastic natural gradient method with an adaptive step-size mechanism built upon our theoretical investigation (robust). Despite its simplicity and the absence of problem-dependent parameter tuning, our method exhibits near state-of-the-art performance with low computational budgets on both image classification and inpainting tasks.
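The following is a minimal, illustrative sketch of the stochastic relaxation idea described in the abstract, not the authors' ASNG-NAS implementation: the discrete architecture choice c is replaced by a categorical distribution p_theta(c), so the expected loss E_{c~p_theta}[f(W, c)] becomes differentiable in theta and can be updated with a stochastic natural-gradient step. The number of candidates K, the toy loss f, and the step-size 0.05 are assumptions for illustration; the paper additionally adapts the step-size and trains the weights W jointly.

```python
# Illustrative sketch only (assumed names: K, f, theta); not the paper's code.
import numpy as np

rng = np.random.default_rng(0)
K = 4                               # number of candidate operations (assumed)
theta = np.full(K, 1.0 / K)         # categorical parameters over architectures

def f(c):
    """Stand-in for the loss of the network built with architecture c.
    In practice this would be a minibatch loss involving the weights W."""
    base = np.array([0.1, 0.9, 0.3, 0.5])
    return base[c] + 0.05 * rng.standard_normal()

for step in range(200):
    c = rng.choice(K, p=theta)      # sample an architecture c ~ p_theta
    loss = f(c)
    one_hot = np.eye(K)[c]
    # For a categorical distribution in expectation parameters, the natural
    # gradient of log p_theta(c) is (one_hot - theta); a one-sample stochastic
    # natural-gradient descent step on E[f] is therefore:
    theta -= 0.05 * loss * (one_hot - theta)
    theta = np.clip(theta, 1e-6, None)
    theta /= theta.sum()            # keep theta on the probability simplex

print("final architecture distribution:", np.round(theta, 3))
```

Under these assumptions the distribution concentrates on the lowest-loss candidate; the paper's contribution is the adaptive step-size mechanism that makes such updates robust without manual tuning.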
URL
https://arxiv.org/abs/1905.08537