Abstract
The boundless space of neural network architectures that could be applied to a given problem -- each with different performance -- means that a Deep Learning expert is still required to identify the best one. This runs counter to the goal of removing the need for such experts. Neural Architecture Search (NAS) offers a solution by identifying the best architecture automatically. To date, however, NAS research has focused on a small set of datasets, which we argue are not representative of real-world problems. We introduce eight new datasets created for a series of NAS Challenges: AddNIST, Language, MultNIST, CIFARTile, Gutenberg, Isabella, GeoClassing, and Chesseract. These datasets and challenges are designed to direct attention to issues in NAS development and to encourage authors to consider how their models will perform on datasets unknown to them at development time. We present experiments using standard Deep Learning methods as well as the best results from challenge participants.
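To make the "standard Deep Learning methods" baseline concrete, the following is a minimal, illustrative sketch (not the authors' code) of fine-tuning an off-the-shelf ResNet-18 on one of the challenge datasets. The file paths (addnist/train_x.npy, addnist/train_y.npy), the NumPy array layout, and the choice of ResNet-18 are assumptions made purely for illustration.

    # Hedged baseline sketch: train a standard ResNet-18 on a challenge dataset.
    # Paths and the (N, C, H, W) float / integer-label format are assumed, not
    # taken from the paper or its data release.
    import numpy as np
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset
    from torchvision.models import resnet18

    train_x = torch.from_numpy(np.load("addnist/train_x.npy")).float()
    train_y = torch.from_numpy(np.load("addnist/train_y.npy")).long()
    loader = DataLoader(TensorDataset(train_x, train_y), batch_size=128, shuffle=True)

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = resnet18(num_classes=int(train_y.max()) + 1)
    # Adapt the stem in case the dataset's channel count differs from ImageNet's 3.
    model.conv1 = nn.Conv2d(train_x.shape[1], 64, kernel_size=7, stride=2,
                            padding=3, bias=False)
    model = model.to(device)

    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for epoch in range(10):  # short run; a real baseline would train longer
        for xb, yb in loader:
            xb, yb = xb.to(device), yb.to(device)
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()

The point of such a baseline is the one the abstract makes: a fixed, hand-chosen architecture must be evaluated against NAS-discovered ones on datasets the NAS method has never seen.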
URL
https://arxiv.org/abs/2404.02189