MixAT: Combining Continuous and Discrete Adversarial Training for LLMs

2025-05-22 17:32:50
Csaba Dékány, Stefan Balauca, Robin Staab, Dimitar I. Dimitrov, Martin Vechev

Abstract

Despite recent efforts in Large Language Model (LLM) safety and alignment, current adversarial attacks on frontier LLMs can still consistently force harmful generations. Although adversarial training has been widely studied and shown to significantly improve the robustness of traditional machine learning models, its strengths and weaknesses in the context of LLMs are less understood. Specifically, while existing discrete adversarial attacks are effective at producing harmful content, training LLMs with concrete adversarial prompts is often computationally expensive, leading to reliance on continuous relaxations. As these relaxations do not correspond to discrete input tokens, such latent training methods often leave models vulnerable to a diverse set of discrete attacks. In this work, we aim to bridge this gap by introducing MixAT, a novel method that combines stronger discrete and faster continuous attacks during training. We rigorously evaluate MixAT across a wide spectrum of state-of-the-art attacks, proposing the At Least One Attack Success Rate (ALO-ASR) metric to capture the worst-case vulnerability of models. We show that MixAT achieves substantially better robustness (ALO-ASR < 20%) than prior defenses (ALO-ASR > 50%), while maintaining a runtime comparable to methods based on continuous relaxations. We further analyze MixAT in realistic deployment settings, exploring how chat templates, quantization, low-rank adapters, and temperature affect both adversarial training and evaluation, revealing additional blind spots in current methodologies. Our results demonstrate that MixAT's discrete-continuous defense offers a principled and superior robustness-accuracy tradeoff with minimal computational overhead, highlighting its promise for building safer LLMs. We provide our code and models at this https URL.
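
To make the discrete-continuous combination concrete, below is a minimal training-step sketch; it is not the authors' implementation. It assumes (hypothetically) a Hugging Face-style causal LM interface, that the discrete attack is realized by sampling from a pool of precomputed adversarial suffixes (e.g., produced by an attack such as GCG), and that the continuous attack is a sign-gradient PGD perturbation in the model's embedding space, trained against safe refusal targets. Names such as `mixat_step`, `suffix_pool`, and `p_discrete` are illustrative.

```python
# Hedged sketch of a MixAT-style training step (not the paper's code).
import random
import torch

def continuous_attack(model, embeds, labels, attention_mask,
                      eps=0.1, steps=5, lr=0.02):
    """PGD in embedding space: find a small perturbation that
    maximizes the loss on the safe (refusal) targets."""
    delta = torch.zeros_like(embeds, requires_grad=True)
    for _ in range(steps):
        out = model(inputs_embeds=embeds + delta,
                    attention_mask=attention_mask, labels=labels)
        grad, = torch.autograd.grad(out.loss, delta)
        with torch.no_grad():
            delta += lr * grad.sign()   # ascend the refusal loss
            delta.clamp_(-eps, eps)     # keep the perturbation small
    return delta.detach()

def mixat_step(model, tokenizer, prompt, refusal, suffix_pool,
               optimizer, p_discrete=0.5):
    """One training step mixing a discrete and a continuous attack."""
    # Discrete attack: with probability p_discrete, append a
    # precomputed adversarial suffix to the harmful prompt.
    if random.random() < p_discrete:
        prompt = prompt + " " + random.choice(suffix_pool)

    # Train the model to answer the (attacked) prompt with a safe
    # refusal; the loss is computed only on the refusal tokens.
    enc = tokenizer(prompt + " " + refusal, return_tensors="pt")
    labels = enc.input_ids.clone()
    n_prompt = len(tokenizer(prompt).input_ids)  # approximate boundary
    labels[:, :n_prompt] = -100  # ignore prompt tokens in the loss

    # Continuous attack on top of the (possibly) attacked prompt.
    embeds = model.get_input_embeddings()(enc.input_ids)
    delta = continuous_attack(model, embeds, labels, enc.attention_mask)

    # Defense step: minimize the loss at the perturbed embeddings.
    out = model(inputs_embeds=embeds + delta,
                attention_mask=enc.attention_mask, labels=labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()
```

One plausible way to reconcile the stronger discrete component with the claimed runtime (comparable to continuous-only training) is sketched above: the expensive discrete search is amortized into a precomputed suffix pool, so each step pays only for the fast embedding-space PGD.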
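The abstract introduces ALO-ASR by name only; a natural formalization, inferred from the name "At Least One Attack Success Rate" and possibly differing from the paper in details, is:

```latex
% Plausible formalization of ALO-ASR (an assumption inferred from the
% metric's name; the paper's exact definition may differ).
% f: the model under evaluation, D: a set of harmful prompts,
% \mathcal{A}: the suite of attacks used for evaluation.
\[
  \mathrm{ALO\text{-}ASR}(f, \mathcal{A}, D)
    \;=\; \frac{1}{|D|} \sum_{x \in D}
      \mathbb{1}\!\left[\, \exists\, a \in \mathcal{A} :
        a(x) \text{ elicits a harmful generation from } f \,\right]
\]
```

Under this reading, a prompt counts as compromised if any attack in the suite succeeds on it, so ALO-ASR upper-bounds every individual attack's success rate and captures worst-case vulnerability.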


URL

https://arxiv.org/abs/2505.16947

PDF

https://arxiv.org/pdf/2505.16947.pdf

