Abstract
Prompt leakage in large language models (LLMs) poses a significant security and privacy threat, particularly in retrieval-augmented generation (RAG) systems. However, leakage in multi-turn LLM interactions along with mitigation strategies has not been studied in a standardized manner. This paper investigates LLM vulnerabilities against prompt leakage across 4 diverse domains and 10 closed- and open-source LLMs. Our unique multi-turn threat model leverages the LLM's sycophancy effect and our analysis dissects task instruction and knowledge leakage in the LLM response. In a multi-turn setting, our threat model elevates the average attack success rate (ASR) to 86.2%, including a 99% leakage with GPT-4 and claude-1.3. We find that some black-box LLMs like Gemini show variable susceptibility to leakage across domains - they are more likely to leak contextual knowledge in the news domain compared to the medical domain. Our experiments measure specific effects of 6 black-box defense strategies, including a query-rewriter in the RAG scenario. Our proposed multi-tier combination of defenses still has an ASR of 5.3% for black-box LLMs, indicating room for enhancement and future direction for LLM security research.
Abstract (translated)
大规模语言模型(LLMs)中的提示泄露对安全和隐私构成重大威胁,尤其是在检索增强生成(RAG)系统中。然而,在多轮LLM交互中,以及缓解策略,对提示泄露的研究还没有以标准化方式进行。本文研究了4个不同领域和10个开源LLM和闭源LLM对提示泄露的漏洞。我们独特的多轮威胁模型利用了LLM的协同效应,并分析了LLM响应中的任务指令和知识泄露。在多轮设置中,我们的威胁模型将平均攻击成功率(ASR)提高至86.2%,包括GPT-4和claude-1.3的99%泄漏。我们发现,一些黑盒LLM,如Gemini,在领域之间表现出不同的泄漏倾向 - 他们在新闻领域比医疗领域更容易泄露上下文知识。我们的实验测量了6个黑盒防御策略的具体效果,包括在RAG场景中的查询重写器。我们提出的多层防御组合对黑盒LLM的ASR为5.3%,表明还有提高的空间和未来LLM安全研究的发展方向。
URL
https://arxiv.org/abs/2404.16251