Paper Reading AI Learner

Approximating DTW with a convolutional neural network on EEG data

2023-01-30 13:27:47
Hugo Lerogeron, Romain Picot-Clemente, Alain Rakotomamonjy, Laurent Heutte

Abstract

Dynamic Time Warping (DTW) is a widely used algorithm for measuring similarities between two time series. It is especially valuable in a wide variety of applications, such as clustering, anomaly detection, classification, or video segmentation, where the time series have different timescales, are irregularly sampled, or are shifted. However, it is ill suited for use as a loss function in an end-to-end learning framework because of its non-differentiability and its quadratic temporal complexity. While differentiable variants of DTW have been introduced by the community, they still present some drawbacks: computing the distance remains expensive, and the resulting similarity tends to blur some differences between the time series. In this paper, we propose a fast and differentiable approximation of DTW by comparing two architectures: the first learns an embedding in which the Euclidean distance mimics the DTW, and the second directly predicts the DTW output using regression. We build the former by training a Siamese neural network to regress the DTW value between two time series. Depending on the nature of the activation function, this approximation is naturally differentiable, and it is efficient to compute. We show, in a time-series retrieval context on EEG datasets, that our methods achieve at least the same level of accuracy as the other main DTW approximations, with higher computational efficiency. We also show that they can be used to learn in an end-to-end setting on long time series by proposing generative models of EEGs.
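To illustrate the two properties the abstract highlights — DTW's tolerance to temporal shifts and its quadratic cost — here is a minimal sketch of the classic DTW recursion with an absolute-difference local cost. This is textbook DTW, not the paper's neural approximation; the example series are invented for illustration.

```python
import numpy as np

def dtw(x, y):
    """Classic DTW between two 1-D series via dynamic programming.
    Runs in O(len(x) * len(y)) time -- the quadratic cost that the
    paper's learned approximation is designed to avoid."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # Best of: insertion, deletion, or match of the two samples.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two series with the same peak, shifted by one step:
a = np.array([0.0, 1.0, 2.0, 1.0, 0.0, 0.0])
b = np.array([0.0, 0.0, 1.0, 2.0, 1.0, 0.0])
print(dtw(a, b))              # 0.0 -- warping aligns the shifted peaks
print(np.linalg.norm(a - b))  # 2.0 -- Euclidean distance penalizes the shift
```

The double loop is why a fast, differentiable surrogate is attractive: an embedding network computes each series' representation once, after which comparing two series is a single Euclidean distance.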


URL

https://arxiv.org/abs/2301.12873

PDF

https://arxiv.org/pdf/2301.12873.pdf
