Abstract
Neural architecture search (NAS) automates the design of neural network architectures, usually by exploring a large and therefore complex architecture search space. To advance the architecture search, we present a graph diffusion-based NAS approach that uses discrete conditional graph diffusion processes to generate high-performing neural network architectures. We then propose a multi-conditioned classifier-free guidance approach for graph diffusion networks that jointly imposes constraints such as high accuracy and low hardware latency. Unlike related work, our method is fully differentiable and requires only a single model training. In our evaluations, we show promising results on six standard benchmarks, yielding novel and unique architectures at high speed, i.e. in less than 0.2 seconds per architecture. Furthermore, we demonstrate the generalisability and efficiency of our method through experiments on the ImageNet dataset.
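The multi-conditioned classifier-free guidance mentioned above can be sketched with the standard guidance combination, where the unconditional prediction is shifted toward each conditional prediction (e.g. one condition for accuracy, one for latency) with its own weight. This is a minimal illustrative sketch, not the paper's implementation; the function name, the use of raw logits, and the per-condition weights are assumptions.

```python
import numpy as np

def multi_cond_cfg(logits_uncond, logits_cond_list, weights):
    """Multi-conditioned classifier-free guidance (illustrative sketch).

    Combines an unconditional prediction with several conditional ones:
        guided = uncond + sum_i w_i * (cond_i - uncond)
    Here the predictions are category logits for a discrete diffusion
    step (e.g. node/edge types of an architecture graph).
    """
    uncond = np.asarray(logits_uncond, dtype=float)
    guided = uncond.copy()
    for logits_cond, w in zip(logits_cond_list, weights):
        # Each condition (e.g. high accuracy, low latency) pulls the
        # guided logits toward its conditional prediction.
        guided += w * (np.asarray(logits_cond, dtype=float) - uncond)
    return guided
```

With a single condition and weight 1.0 this reduces to the plain conditional prediction; weights above 1.0 strengthen the corresponding constraint, mirroring how guidance scales are typically used in diffusion models.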
URL
https://arxiv.org/abs/2403.06020