Abstract
Improving search efficiency is one of the crucial objectives of Neural Architecture Search (NAS). However, many current approaches overlook the generality of the search strategy and fail to reduce computational redundancy during the search process, especially in one-shot NAS architectures. Moreover, current NAS methods suffer from invalid reparameterization in non-linear search spaces, leading to poor efficiency in common search spaces such as DARTS. In this paper, we propose TopoNAS, a model-agnostic approach for gradient-based one-shot NAS that significantly reduces search time and memory usage through topological simplification of searchable paths. First, we model the non-linearity in search spaces to reveal the difficulties of reparameterization. To improve search efficiency, we then present a topological simplification method that iteratively applies module-sharing strategies to simplify the topological structure of searchable paths. In addition, we propose a kernel normalization technique to preserve search accuracy. Experimental results on the NASBench201 benchmark with various search spaces demonstrate that TopoNAS improves the search efficiency of various architectures while maintaining a high level of accuracy. The project page is available at this https URL.
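To make the reparameterization issue concrete, below is a minimal PyTorch sketch, not from the TopoNAS code and with hypothetical names (`merged_conv_output`), showing why a DARTS-style weighted sum of linear candidate operations can be folded into a single convolution, while a non-linearity inside each path breaks that folding.

```python
# Minimal sketch (assumed setup, not the authors' implementation): in a
# gradient-based one-shot supernet, a mixed edge computes
# sum_i softmax(alpha)_i * op_i(x). When every op_i is linear (e.g. a conv),
# the architecture weights fold into one merged kernel, so the supernet runs
# a single conv instead of one per candidate.
import torch
import torch.nn as nn
import torch.nn.functional as F

def merged_conv_output(x, convs, weights):
    """Reparameterize sum_i w_i * conv_i(x) as a single convolution.

    Valid only because convolution is linear; all candidate convs are
    assumed to share shape, stride, and padding.
    """
    merged_kernel = sum(w * c.weight for w, c in zip(weights, convs))
    merged_bias = sum(w * c.bias for w, c in zip(weights, convs))
    return F.conv2d(x, merged_kernel, merged_bias, padding=convs[0].padding)

x = torch.randn(2, 8, 16, 16)
convs = [nn.Conv2d(8, 8, 3, padding=1) for _ in range(3)]
alpha = torch.randn(3)
w = torch.softmax(alpha, dim=0)

naive = sum(wi * c(x) for wi, c in zip(w, convs))  # three conv calls
merged = merged_conv_output(x, convs, w)           # one conv call
print(torch.allclose(naive, merged, atol=1e-5))    # True

# With a non-linearity inside each path (e.g. ReLU-Conv), the sum no longer
# folds: sum_i w_i * relu(conv_i(x)) != relu(merged_conv(x)). This is the
# "invalid reparameterization in non-linear search spaces" the abstract
# refers to, which the paper addresses via topological simplification and
# module sharing rather than naive kernel merging.
```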
URL
https://arxiv.org/abs/2408.01311