DACAD: Domain Adaptation Contrastive Learning for Anomaly Detection in Multivariate Time Series

2024-04-17 11:20:14
Zahra Zamanzadeh Darban, Geoffrey I. Webb, Mahsa Salehi

Abstract

Time series anomaly detection (TAD) faces a significant challenge due to the scarcity of labelled data, which hinders the development of accurate detection models. Unsupervised domain adaptation (UDA) addresses this challenge by leveraging a labelled dataset from a related domain to detect anomalies in a target dataset. Existing domain adaptation techniques assume that the number of anomalous classes does not change between the source and target domains. In this paper, we propose a novel Domain Adaptation Contrastive learning for Anomaly Detection in multivariate time series (DACAD) model to address this issue by combining UDA and contrastive representation learning. DACAD's approach includes an anomaly injection mechanism that introduces various types of synthetic anomalies, enhancing the model's ability to generalise across unseen anomalous classes in different domains. This method significantly broadens the model's adaptability and robustness. Additionally, we propose a supervised contrastive loss for the source domain and a self-supervised contrastive triplet loss for the target domain, improving comprehensive feature representation learning and extraction of domain-invariant features. Finally, an effective Centre-based Entropy Classifier (CEC) is proposed specifically for anomaly detection, facilitating accurate learning of normal boundaries in the source domain. Our extensive evaluation across multiple real-world datasets against leading models in time series anomaly detection and UDA underscores DACAD's effectiveness. The results validate DACAD's superiority in transferring knowledge across domains and its potential to mitigate the challenge of limited labelled data in time series anomaly detection.
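
The abstract names two of DACAD's core ingredients: injecting synthetic anomalies into time-series windows and training with a triplet-style contrastive loss. The snippet below is a minimal, hypothetical sketch of how those two ideas can fit together, assuming a window-based encoder; the spike-injection rule, the jitter augmentation, and all names (inject_spike_anomaly, triplet_contrastive_loss) are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: inject a synthetic point anomaly into a (time x feature)
# window and use the corrupted window as the negative in a triplet loss.
# Shapes, magnitudes, and augmentations are assumptions for illustration only.
import torch
import torch.nn.functional as F


def inject_spike_anomaly(window: torch.Tensor, magnitude: float = 3.0) -> torch.Tensor:
    """Return a copy of `window` (T x D) with a point spike injected at a random position."""
    corrupted = window.clone()
    t = torch.randint(0, window.shape[0], (1,)).item()   # random time step
    f = torch.randint(0, window.shape[1], (1,)).item()   # random feature
    corrupted[t, f] += magnitude * window[:, f].std()     # spike scaled by that feature's std
    return corrupted


def triplet_contrastive_loss(encoder, window: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """Anchor = original window, positive = weakly jittered copy, negative = anomaly-injected copy."""
    anchor = encoder(window.unsqueeze(0))
    positive = encoder((window + 0.01 * torch.randn_like(window)).unsqueeze(0))  # small jitter
    negative = encoder(inject_spike_anomaly(window).unsqueeze(0))
    return F.triplet_margin_loss(anchor, positive, negative, margin=margin)
```

Any temporal encoder mapping a (1, T, D) window to an embedding could be plugged in here; the margin, injection magnitude, and augmentation strength are placeholders one would tune, and the supervised contrastive loss on the labelled source domain and the Centre-based Entropy Classifier described in the abstract are not shown.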

URL

https://arxiv.org/abs/2404.11269

PDF

https://arxiv.org/pdf/2404.11269.pdf
