Abstract
Recent explorations with commercial Large Language Models (LLMs) have shown that non-expert users can jailbreak LLMs by simply manipulating the prompts; resulting in degenerate output behavior, privacy and security breaches, offensive outputs, and violations of content regulator policies. Limited formal studies have been carried out to formalize and analyze these attacks and their mitigations. We bridge this gap by proposing a formalism and a taxonomy of known (and possible) jailbreaks. We perform a survey of existing jailbreak methods and their effectiveness on open-source and commercial LLMs (such as GPT 3.5, OPT, BLOOM, and FLAN-T5-xxl). We further propose a limited set of prompt guards and discuss their effectiveness against known attack types.
Abstract (translated)
最近与商业大型语言模型(LLMs)的探险表明,非专家用户可以通过简单地操纵提示来破解LLMs,导致退化的输出行为、隐私和安全漏洞、进攻性输出以及违反内容监管政策。有限的正式研究已经尝试过 formalize 和 analyze 这些攻击及其缓解方法。我们提出了一个形式化和分类方案来解决这些问题,我们对现有的破解方法和它们在开源和商业LLMs(如GPT 3.5、OPT、BLOOM和FLAN-T5-xxl)上的效力进行了调查,并进一步提出了一组提示保护方案,并讨论它们对抗已知攻击类型的有效性。
URL
https://arxiv.org/abs/2305.14965