Paper Reading AI Learner

TrojanTime: Backdoor Attacks on Time Series Classification

2025-02-02 03:24:24
Chang Dong, Zechao Sun, Guangdong Bai, Shuying Piao, Weitong Chen, Wei Emma Zhang

Abstract

Time Series Classification (TSC) is highly vulnerable to backdoor attacks, posing significant security threats. Existing methods primarily focus on data poisoning during the training phase, designing sophisticated triggers to improve stealthiness and the attack success rate (ASR). However, in practical scenarios, attackers often cannot access the training data. Moreover, when the training data is inaccessible, it is challenging for the model to maintain generalization on clean test data while remaining vulnerable to poisoned inputs. To address these challenges, we propose TrojanTime, a novel two-stage training algorithm. In the first stage, we generate a pseudo-dataset from an arbitrary external dataset via targeted adversarial attacks; the clean model is then continually trained on this pseudo-dataset and its poisoned version. To preserve generalization, the second stage employs a carefully designed training strategy that combines logits alignment with batch normalization freezing. We evaluate TrojanTime with five types of triggers across four TSC architectures on UCR benchmark datasets from diverse domains. The results demonstrate that TrojanTime executes effective backdoor attacks while maintaining clean accuracy. Finally, to mitigate this threat, we propose a defensive unlearning strategy that effectively reduces the ASR while preserving clean accuracy.
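The abstract describes the two-stage attack only at a high level. The snippet below is a minimal PyTorch sketch of one plausible reading: pseudo-samples are crafted from arbitrary external series with a targeted PGD attack against the clean model, a trigger is stamped onto a copy of them, and the backdoored model is continually trained with a backdoor loss plus a logits-alignment term while BatchNorm statistics stay frozen. Every concrete choice here (the helper names targeted_pgd, add_trigger, and trojantime_step, the PGD hyperparameters, and the MSE form of the alignment loss) is an assumption for illustration, not the paper's implementation.

```python
# Minimal, hypothetical PyTorch sketch of the two-stage idea described above.
# Helper names and hyperparameters are illustrative assumptions, not the authors' code.
import torch
import torch.nn.functional as F


def targeted_pgd(model, x, y_target, eps=0.1, alpha=0.01, steps=20):
    """Stage 1: craft pseudo-samples from arbitrary external series so the
    clean model classifies them as y_target (a targeted PGD attack)."""
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x_adv), y_target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # step *against* the gradient so predictions move toward y_target
        x_adv = (x_adv - alpha * grad.sign()).detach()
        x_adv = x.detach() + (x_adv - x.detach()).clamp(-eps, eps)
        x_adv.requires_grad_(True)
    return x_adv.detach()


def freeze_batchnorm(model):
    """Stage 2 trick: keep BatchNorm statistics and affine parameters fixed
    at the clean model's values during continual training."""
    for m in model.modules():
        if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
            m.eval()
            if m.affine:
                m.weight.requires_grad_(False)
                m.bias.requires_grad_(False)


def trojantime_step(model, clean_model, x_ext, y_pseudo, y_attack,
                    add_trigger, optimizer, lam=1.0):
    """One continual-training step on pseudo data and its poisoned copy,
    with logits alignment against the frozen clean model."""
    clean_model.eval()                    # frozen reference model
    model.train()
    freeze_batchnorm(model)

    x_pseudo = targeted_pgd(clean_model, x_ext, y_pseudo)  # pseudo-dataset batch
    x_poison = add_trigger(x_pseudo)                        # e.g. a fixed pattern trigger

    with torch.no_grad():
        ref_logits = clean_model(x_pseudo)                  # alignment target

    # backdoor objective on triggered inputs + logits alignment on clean-looking inputs
    loss = F.cross_entropy(model(x_poison), y_attack) \
         + lam * F.mse_loss(model(x_pseudo), ref_logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this reading, freezing BatchNorm keeps running statistics estimated on the original training data from drifting toward the pseudo-data distribution, while the alignment term anchors the backdoored model's clean-input logits to those of the frozen clean model, which is one way clean accuracy could be preserved without access to the real training set.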


URL

https://arxiv.org/abs/2502.00646

PDF

https://arxiv.org/pdf/2502.00646.pdf

