Abstract
The advent of large language models (LLMs) has revolutionized natural language processing, yet these models can be attacked to produce harmful content. Despite efforts to ethically align LLMs, such alignment is often fragile and can be circumvented by jailbreak attacks using optimized or manually crafted adversarial prompts. To address this, we introduce the Information Bottleneck Protector (IBProtector), a defense mechanism grounded in the information bottleneck principle, with a modified objective that avoids trivial solutions. IBProtector selectively compresses and perturbs prompts via a lightweight, trainable extractor, preserving only the information essential for the target LLM to produce the expected answer. We further consider the setting where gradients are not accessible, so that IBProtector remains compatible with any LLM. Our empirical evaluations show that IBProtector outperforms current defense methods in mitigating jailbreak attempts without substantially degrading response quality or inference speed. Its effectiveness and adaptability across attack methods and target LLMs underscore its potential as a novel, transferable defense that strengthens LLM security without modifying the underlying models.
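For context, the classical information bottleneck objective that IBProtector builds on seeks a compressed representation $Z$ of the input $X$ that remains predictive of the target $Y$ (the abstract notes the authors modify this objective to avoid trivial solutions; the modified form is not reproduced here):

$$\min_{p(z \mid x)} \; I(X; Z) - \beta \, I(Z; Y)$$

where $I(\cdot\,;\cdot)$ denotes mutual information and $\beta > 0$ trades off compression of the prompt against preserving the information needed for the expected response.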
URL
https://arxiv.org/abs/2404.13968