Abstract
Most previous approaches to Time Series Classification (TSC) highlight the significance of receptive fields and frequencies while overlooking time resolution. Hence, they unavoidably suffer from scalability issues as they integrate an extensive range of receptive fields into their classification models. Other methods, while adapting better to large datasets, require manual design and still fail to reach the optimal architecture due to the uniqueness of each dataset. We overcome these challenges by proposing a novel multi-scale search space and a framework for Neural Architecture Search (NAS), which addresses both frequency and time resolution and discovers the suitable scale for a specific dataset. We further show that our model can serve as a backbone for a powerful Transformer module with both untrained and pre-trained weights. Our search space achieves state-of-the-art performance on four datasets from four different domains while introducing more than ten highly fine-tuned models for each dataset.
URL
https://arxiv.org/abs/2402.13822