Abstract
Neural architecture search (NAS), or the automated design of neural network models, remains a very challenging meta-learning problem. Several recent works (called "one-shot" approaches) have focused on dramatically reducing NAS running time by leveraging proxy models that still provide architectures with competitive performance. In our work, we propose a new meta-learning algorithm that we call CoNAS, or Compressive sensing-based Neural Architecture Search. Our approach merges ideas from one-shot approaches with iterative techniques for learning low-degree sparse Boolean polynomial functions. We validate our approach on several standard test datasets, discover novel architectures hitherto unreported, and achieve competitive (or better) results in both performance and search time compared to existing NAS approaches. Further, we support our algorithm with a theoretical analysis, providing upper bounds on the number of measurements needed to perform reliable meta-learning; to our knowledge, these analysis tools are novel to the NAS literature and may be of independent interest.
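The recovery step the abstract alludes to admits a compact illustration. The Python sketch below treats each candidate architecture as a ±1 mask over a one-shot model, collects scores of random masks as compressive measurements, fits a sparse low-degree Fourier (parity) expansion, and picks the mask that maximizes the recovered surrogate. This is a minimal toy, not the paper's pipeline: the parameters (n, d, m), the `evaluate` stand-in, and the use of scikit-learn's Lasso as the sparse-recovery solver are all illustrative assumptions.

```python
from itertools import combinations, product

import numpy as np
from sklearn.linear_model import Lasso

n, d, m = 12, 2, 200  # mask length, max monomial degree, number of measurements
rng = np.random.default_rng(0)

# All monomial index sets S with |S| <= d; chi_S(alpha) = prod_{i in S} alpha_i.
monomials = [S for k in range(d + 1) for S in combinations(range(n), k)]

def parity_features(masks):
    """Evaluate every degree-<=d parity function chi_S on each +/-1 mask."""
    return np.stack([np.prod(masks[:, list(S)], axis=1) for S in monomials], axis=1)

def evaluate(mask):
    # Hypothetical stand-in for training/validating the sub-architecture that
    # `mask` selects from a one-shot model; a synthetic sparse polynomial
    # plays the role of validation accuracy so the sketch runs on its own.
    return 0.5 + 0.3 * mask[0] * mask[3] - 0.2 * mask[7]

# Random +/-1 masks act as compressive "measurements" of the unknown function.
masks = rng.choice([-1.0, 1.0], size=(m, n))
y = np.array([evaluate(a) for a in masks])

# Recover the (approximately sparse) low-degree Fourier coefficients via Lasso.
coef = Lasso(alpha=0.01, fit_intercept=False).fit(parity_features(masks), y).coef_

# Pick the mask maximizing the recovered surrogate polynomial
# (brute force over all 2^n masks is fine at this toy scale).
candidates = np.array(list(product([-1.0, 1.0], repeat=n)))
best = candidates[np.argmax(parity_features(candidates) @ coef)]
print("recovered best mask:", best)
```

In the full algorithm one would optimize the surrogate over the actual search space and, per the abstract's mention of iterative techniques, possibly repeat the measure-and-recover loop; the measurement bounds the authors prove concern how many such evaluations m suffice for reliable recovery.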
URL
https://arxiv.org/abs/1906.02869