Abstract
Neural architecture search (NAS) has made tremendous progress in the automatic design of effective neural network structures but suffers from a heavy computational burden. One-shot NAS significantly alleviates this burden through weight sharing, improving computational efficiency. Zero-shot NAS further reduces the cost by predicting a network's performance from its initial state, requiring no training at all. Both methods aim to distinguish "good" from "bad" architectures, i.e., to achieve ranking consistency between predicted and true performance. In this paper, we propose Ranking Distillation one-shot NAS (RD-NAS), which enhances ranking consistency by using zero-cost proxies as a cheap teacher and adopting a margin ranking loss to distill ranking knowledge. Specifically, we propose a margin subnet sampler that distills ranking knowledge from zero-shot NAS into one-shot NAS by introducing the group distance as the margin. Our evaluation on NAS-Bench-201 and a ResNet-based search space demonstrates that RD-NAS achieves 10.7% and 9.65% improvements in ranking ability, respectively. Our code is available at this https URL.
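To make the distillation idea concrete, below is a minimal PyTorch sketch of a pairwise margin ranking loss that treats zero-cost proxy scores as the teacher ranking for a batch of sampled subnets. The function name, the score tensors, and the score-gap margin are illustrative assumptions; in particular, the teacher score gap is only a simple stand-in for the paper's group-distance margin, not a reproduction of it.

import torch
import torch.nn.functional as F

def ranking_distillation_loss(student_scores, teacher_scores, base_margin=0.1):
    # student_scores: performance estimates for a batch of sampled subnets,
    #   obtained by evaluating them with the one-shot supernet's shared weights
    # teacher_scores: zero-cost proxy scores for the same subnets (the teacher)
    n = student_scores.numel()
    i, j = torch.triu_indices(n, n, offset=1)  # all subnet pairs with i < j
    # target = +1 if the teacher ranks subnet i above subnet j, else -1
    target = torch.sign(teacher_scores[i] - teacher_scores[j])
    # Per-pair margin grows with the teacher's score gap (hypothetical proxy
    # for the group-distance margin described in the paper)
    margin = base_margin * (teacher_scores[i] - teacher_scores[j]).abs()
    # Margin ranking loss: max(0, -target * (s_i - s_j) + margin), averaged
    diff = student_scores[i] - student_scores[j]
    return F.relu(-target * diff + margin).mean()

In a one-shot training loop, this loss would be added to the supernet objective, e.g. loss = ranking_distillation_loss(supernet_scores, proxy_scores) over each batch of sampled subnets, nudging the supernet's implied ranking toward the zero-cost proxy's ranking.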
URL
https://arxiv.org/abs/2301.09850