Paper Reading AI Learner

Boosting Order-Preserving and Transferability for Neural Architecture Search: a Joint Architecture Refined Search and Fine-tuning Approach

2024-03-18 00:13:41
Beichen Zhang, Xiaoxing Wang, Xiaohan Qin, Junchi Yan
     

Abstract

A supernet is a core component in many recent Neural Architecture Search (NAS) methods. It not only embodies the search space but also provides a (relative) estimation of the final performance of candidate architectures. Thus, it is critical that the top architectures ranked by a supernet are consistent with those ranked by true performance, a property known as order-preserving ability. In this work, we analyze order-preserving ability on the whole search space (global) and on a sub-space of top architectures (local), and empirically show that local order-preserving in current two-stage NAS methods still needs improvement. To rectify this, we propose a novel concept of Supernet Shifting, a refined search strategy that combines architecture search with supernet fine-tuning. Specifically, besides being used for evaluation, the training loss is also accumulated during the search, and the supernet is updated at every iteration. Since superior architectures are sampled more frequently in evolutionary search, the supernet is encouraged to focus on top architectures, thereby improving local order-preserving. Moreover, a pre-trained supernet is often not reusable across datasets in one-shot methods. We show that Supernet Shifting enables transferring the supernet to a new dataset: the last classifier layer is reset and trained through evolutionary search. Comprehensive experiments show that our method has better order-preserving ability and can find a dominating architecture. Furthermore, the pre-trained supernet can easily be transferred to a new dataset with no loss of performance.
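The core idea — interleaving evolutionary search with supernet fine-tuning so that frequently sampled (i.e. superior) architectures shift the shared weights toward the top of the search space — can be illustrated with a toy sketch. All names and numbers below (the scalar "weight" per operation, the learning rate, population sizes) are hypothetical simplifications, not the paper's actual implementation:

```python
import random

random.seed(0)
NUM_LAYERS, NUM_OPS = 3, 4

# Shared supernet "weights": one scalar quality score per (layer, op) choice.
supernet = [[random.random() for _ in range(NUM_OPS)] for _ in range(NUM_LAYERS)]

def evaluate(arch):
    """Proxy accuracy of an architecture under the current supernet weights."""
    return sum(supernet[layer][op] for layer, op in enumerate(arch)) / NUM_LAYERS

def finetune(arch, lr=0.05):
    """One supernet update on the sampled path (a toy stand-in for a gradient step)."""
    for layer, op in enumerate(arch):
        # Push the weights of the sampled operations toward a "fully trained" optimum.
        supernet[layer][op] += lr * (1.0 - supernet[layer][op])

def mutate(arch, p=0.3):
    return [random.randrange(NUM_OPS) if random.random() < p else op for op in arch]

# Evolutionary search with Supernet Shifting: every iteration both ranks
# candidates and fine-tunes the supernet on the surviving top architectures.
population = [[random.randrange(NUM_OPS) for _ in range(NUM_LAYERS)] for _ in range(8)]
for _ in range(20):
    parents = sorted(population, key=evaluate, reverse=True)[:4]  # keep the top half
    for arch in parents:
        finetune(arch)  # supernet shifting: update shared weights on top candidates
    population = parents + [mutate(random.choice(parents)) for _ in range(4)]

best = max(population, key=evaluate)
print(best, round(evaluate(best), 3))
```

Because only the surviving parents are used for the update, the supernet's estimates become progressively more reliable exactly in the sub-space of top architectures, which is the local order-preserving property the abstract targets.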


URL

https://arxiv.org/abs/2403.11380

PDF

https://arxiv.org/pdf/2403.11380.pdf

