Abstract
One-shot neural architecture search (NAS) algorithms have been widely used to reduce the computation cost of architecture search. However, because of interference among subnets whose weights are shared, subnets inherited from a super-net trained by these algorithms show poor consistency in accuracy ranking. To address this problem, we propose a step-by-step super-net training scheme that transitions from one-shot NAS to few-shot NAS. In this scheme, we first train the super-net in the one-shot way, and then disentangle the super-net weights by splitting the super-net into multiple sub-super-nets and training them gradually. Finally, our method ranked 4th in the CVPR 2022 Lightweight NAS Challenge Track 1. Our code is available at this https URL.
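Below is a minimal sketch of the two-stage scheme the abstract describes: one-shot training of a weight-sharing super-net, followed by splitting it into restricted sub-super-nets that are trained separately to reduce weight interference. All names here (`SuperNet`, `train_one_shot`, `split_supernet`) and the toy setup are illustrative assumptions, not the authors' released code.

```python
# Sketch of one-shot -> few-shot super-net training (illustrative, not the
# paper's implementation). Assumes a simple choice-block search space.
import copy
import random
import torch
import torch.nn as nn

class ChoiceBlock(nn.Module):
    """One layer holding several candidate operations (here, conv kernels)."""
    def __init__(self, channels, candidates=(3, 5, 7)):
        super().__init__()
        self.ops = nn.ModuleList(
            nn.Conv2d(channels, channels, k, padding=k // 2) for k in candidates
        )

    def forward(self, x, choice):
        return self.ops[choice](x)

class SuperNet(nn.Module):
    def __init__(self, channels=16, depth=4):
        super().__init__()
        self.blocks = nn.ModuleList(ChoiceBlock(channels) for _ in range(depth))
        # allowed[i] restricts which candidates block i may sample;
        # the full one-shot super-net allows every candidate everywhere.
        self.allowed = [list(range(len(b.ops))) for b in self.blocks]

    def forward(self, x, arch):
        for block, choice in zip(self.blocks, arch):
            x = block(x, choice)
        return x

    def sample_arch(self):
        return [random.choice(a) for a in self.allowed]

def train_one_shot(net, steps, data_fn):
    """Stage 1: standard one-shot training with uniformly sampled subnets."""
    opt = torch.optim.SGD(net.parameters(), lr=0.05)
    for _ in range(steps):
        x, y = data_fn()
        loss = nn.functional.mse_loss(net(x, net.sample_arch()), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

def split_supernet(net, block_idx):
    """Stage 2: disentangle shared weights by splitting on one choice block.
    Each sub-super-net inherits the one-shot weights but is fixed to a single
    candidate at block_idx, so its remaining weights see less interference."""
    subs = []
    for choice in net.allowed[block_idx]:
        sub = copy.deepcopy(net)
        sub.allowed[block_idx] = [choice]
        subs.append(sub)
    return subs

if __name__ == "__main__":
    fake = lambda: (torch.randn(2, 16, 8, 8), torch.randn(2, 16, 8, 8))
    supernet = SuperNet()
    train_one_shot(supernet, steps=20, data_fn=fake)       # one-shot stage
    for sub in split_supernet(supernet, block_idx=0):      # few-shot stage:
        train_one_shot(sub, steps=10, data_fn=fake)        # train each gradually
```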
URL
https://arxiv.org/abs/2206.05896