Abstract
Large Language Models (LLMs) have achieved remarkable success across diverse tasks, yet they remain vulnerable to adversarial attacks, notably the well-documented \textit{jailbreak} attack. Recently, the Greedy Coordinate Gradient (GCG) attack has demonstrated efficacy in exploiting this vulnerability by optimizing adversarial prompts through a combination of gradient heuristics and greedy search. However, the low efficiency of this attack has become a bottleneck in the attack process. To mitigate this limitation, in this paper we rethink the generation of adversarial prompts through an optimization lens, aiming to stabilize the optimization process and harness more heuristic insights from previous iterations. Specifically, we introduce the \textbf{M}omentum \textbf{A}ccelerated G\textbf{C}G (\textbf{MAC}) attack, which incorporates a momentum term into the gradient heuristic. Experimental results showcase the notable enhancement achieved by MAC in gradient-based attacks on aligned language models. Our code is available at this https URL.
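The core idea, per the abstract, is to replace GCG's per-iteration gradient with a momentum-accumulated gradient before the greedy candidate search. A minimal sketch of that accumulation step is below; the function name `momentum_update`, the decay factor `mu=0.9`, and the toy gradient values are all illustrative assumptions, not details from the paper.

```python
def momentum_update(grad, buffer, mu=0.9):
    # Sketch of a MAC-style momentum accumulation (names/values assumed):
    # the buffer carries heuristic signal from previous iterations,
    # stabilizing the gradient used for greedy token selection.
    if buffer is None:
        return list(grad)
    return [mu * b + g for b, g in zip(buffer, grad)]

# Toy usage: three iterations of per-token surrogate gradients.
buf = None
for g in [[1.0, -2.0], [0.5, 0.5], [-1.0, 1.0]]:
    buf = momentum_update(g, buf)
# After three steps, buf blends all three gradients, with older
# contributions geometrically decayed by mu.
```

In the full attack, each coordinate's accumulated value would rank candidate token substitutions for the greedy search, just as the raw gradient does in GCG.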
URL
https://arxiv.org/abs/2405.01229