Abstract
Large Language Models (LLMs) are powerful tools with profound societal impacts, yet their ability to generate responses to diverse and uncontrolled inputs leaves them vulnerable to adversarial attacks. While existing defenses often struggle to generalize across varying attack types, recent advancements in representation engineering offer promising alternatives. In this work, we propose a defense framework that formulates model defense as a contrastive representation learning (CRL) problem. Our method finetunes a model using a triplet-based loss combined with adversarial hard negative mining to encourage separation between benign and harmful representations. Our experimental results across multiple models demonstrate that our approach outperforms prior representation engineering-based defenses, improving robustness against both input-level and embedding-space attacks without compromising standard performance. Our code is available at this https URL.
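To make the triplet-based objective concrete, below is a minimal PyTorch sketch of a triplet loss over hidden representations with a simple hard-negative selection step. The function name `triplet_representation_loss`, the tensor shapes, and the margin value are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F


def triplet_representation_loss(anchor, positive, negatives, margin=1.0):
    """Illustrative triplet loss over hidden-state vectors with hard negative mining.

    anchor    -- (d,)   representation of a benign prompt
    positive  -- (d,)   representation of another benign prompt
    negatives -- (k, d) representations of harmful / adversarial prompts
    """
    # Hard negative mining: pick the harmful representation closest to the anchor,
    # i.e. the one most easily confused with benign behaviour.
    dists = torch.cdist(anchor.unsqueeze(0), negatives).squeeze(0)  # (k,)
    hard_negative = negatives[dists.argmin()]

    d_pos = F.pairwise_distance(anchor.unsqueeze(0), positive.unsqueeze(0))
    d_neg = F.pairwise_distance(anchor.unsqueeze(0), hard_negative.unsqueeze(0))

    # Pull benign representations together; push the hardest harmful one
    # away by at least `margin`.
    return F.relu(d_pos - d_neg + margin).mean()


# Toy usage with random vectors standing in for LLM hidden states.
anchor = torch.randn(64)
positive = torch.randn(64)
negatives = torch.randn(8, 64)
loss = triplet_representation_loss(anchor, positive, negatives)
```

In the setting the abstract describes, such representations would be taken from the model being finetuned, so the loss gradients flow back into the model's weights; how the layers, positives, and adversarial negatives are chosen is specified in the paper, not here.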
URL
https://arxiv.org/abs/2506.11938