Abstract
Neural Architecture Search (NAS) is a costly practice. Because a search space can span a vast number of design choices and each architecture evaluation incurs nontrivial overhead, it is hard for an algorithm to sufficiently explore candidate networks. In this paper, we propose AutoBuild, a scheme that learns to align the latent embeddings of operations and architecture modules with the ground-truth performance of the architectures they appear in. By doing so, AutoBuild can assign interpretable importance scores to architecture modules, ranging from individual operation features to larger macro operation sequences, so that high-performance neural networks can be constructed without any need for search. Through experiments on state-of-the-art image classification, segmentation, and Stable Diffusion models, we show that by mining a relatively small set of evaluated architectures, AutoBuild learns either to build high-quality architectures directly or to reduce the search space to focus on relevant regions, finding architectures that outperform both the original labeled ones and those found by search baselines. Code available at this https URL
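The core idea of mining labeled architectures to score modules and then constructing a network without search can be illustrated with a toy sketch. This is not the paper's method (AutoBuild uses learned latent embeddings); here we simply fit architecture accuracy from operation-count features so the fitted coefficients act as interpretable per-operation importance scores, then greedily assemble an architecture from the top-scoring operation. The operation names, the simulated dataset, and the linear model are all illustrative assumptions.

```python
import numpy as np

# Hypothetical search space: 4 candidate operations, 4 slots per architecture.
rng = np.random.default_rng(0)
ops = ["conv3x3", "conv5x5", "skip", "max_pool"]

# Simulated mined set: 50 labeled architectures with a hidden ground-truth
# per-op contribution (true_w) plus small evaluation noise.
true_w = np.array([0.8, 0.5, 0.3, 0.1])
archs = rng.integers(0, len(ops), size=(50, 4))
counts = np.stack([(archs == i).sum(1) for i in range(len(ops))], axis=1)
acc = counts @ true_w + rng.normal(0.0, 0.01, 50)

# Least-squares fit: coefficients serve as interpretable importance scores.
importance, *_ = np.linalg.lstsq(counts.astype(float), acc, rcond=None)

# Construct an architecture directly, with no search: fill every slot with
# the highest-scoring operation.
built = [ops[int(np.argmax(importance))]] * 4
print(dict(zip(ops, np.round(importance, 2))), built)
```

With low evaluation noise, the fitted scores recover the hidden contributions closely, and the greedy construction picks the dominant operation for every slot.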
URL
https://arxiv.org/abs/2403.13293