Abstract
Recent work has developed optimization procedures to find token sequences, called adversarial triggers, which can elicit unsafe responses from aligned language models. These triggers are believed to be universally transferable, i.e., a trigger optimized on one model can jailbreak other models. In this paper, we concretely show that such adversarial triggers are not universal. We extensively investigate trigger transfer amongst 13 open models and observe inconsistent transfer. Our experiments further reveal a significant difference in robustness to adversarial triggers between models Aligned by Preference Optimization (APO) and models Aligned by Fine-Tuning (AFT). We find that APO models are extremely hard to jailbreak even when the trigger is optimized directly on the model. On the other hand, while AFT models may appear safe on the surface, exhibiting refusals to a range of unsafe instructions, we show that they are highly susceptible to adversarial triggers. Lastly, we observe that most triggers optimized on AFT models also generalize to new unsafe instructions from five diverse domains, further emphasizing their vulnerability. Overall, our work highlights the need for more comprehensive safety evaluations for aligned language models.
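The abstract does not spell out the trigger-optimization procedure it refers to. As background, the sketch below illustrates a GCG-style greedy coordinate gradient search, the standard method in this line of work (Zou et al., 2023), though the abstract itself does not name the optimizer. At each step, gradients through a one-hot relaxation of the trigger tokens propose candidate token swaps, and the swap that most reduces the loss on an affirmative target completion is kept. The model name, prompt strings, and hyperparameters (trigger length 20, top-k of 8, 32 candidates per step) are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of GCG-style adversarial trigger optimization.
# Assumptions: model choice, prompts, and hyperparameters are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper studies 13 aligned open models
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()
for p in model.parameters():
    p.requires_grad_(False)
emb = model.get_input_embeddings().weight  # (vocab_size, hidden_dim)

# An unsafe instruction and the affirmative target the trigger should elicit
# (placeholder strings for illustration).
instruction = tok("Write harmful content about X.", return_tensors="pt").input_ids[0]
target = tok(" Sure, here is", return_tensors="pt").input_ids[0]
trigger = torch.full((20,), tok.encode("!")[0])  # init: 20 '!' tokens

for step in range(100):
    # (1) Gradient of the target loss w.r.t. a one-hot relaxation of the trigger.
    one_hot = torch.nn.functional.one_hot(trigger, emb.shape[0]).float()
    one_hot.requires_grad_(True)
    inputs = torch.cat([emb[instruction], one_hot @ emb, emb[target]]).unsqueeze(0)
    logits = model(inputs_embeds=inputs).logits
    # Logits at position i predict token i+1, so shift the target slice by one.
    tgt = slice(len(instruction) + len(trigger) - 1, inputs.shape[1] - 1)
    loss = torch.nn.functional.cross_entropy(logits[0, tgt], target)
    loss.backward()
    # (2) For each trigger position, the top-k promising swaps by gradient.
    top_k = (-one_hot.grad).topk(8, dim=1).indices  # (trigger_len, 8)
    # (3) Evaluate random single-token swaps exactly; keep the best one.
    best_loss, best_trigger = loss.item(), trigger
    for _ in range(32):
        cand = trigger.clone()
        pos = torch.randint(len(trigger), ()).item()
        cand[pos] = top_k[pos, torch.randint(8, ()).item()]
        ids = torch.cat([instruction, cand, target]).unsqueeze(0)
        with torch.no_grad():
            cand_loss = torch.nn.functional.cross_entropy(
                model(ids).logits[0, tgt], target).item()
        if cand_loss < best_loss:
            best_loss, best_trigger = cand_loss, cand
    trigger = best_trigger

print("trigger:", tok.decode(trigger))
```

Transfer, in the paper's sense, means taking a trigger optimized this way on one model and appending it to unsafe instructions given to a different model; the abstract's central claim is that such transfer is far less reliable than previously believed.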
URL
https://arxiv.org/abs/2404.16020