Abstract
The implications of backdoor attacks on English-centric large language models (LLMs) have been widely examined - such attacks can be achieved by embedding malicious behaviors during training and activated under specific conditions that trigger malicious outputs. However, the impact of backdoor attacks on multilingual models remains under-explored. Our research focuses on cross-lingual backdoor attacks against multilingual LLMs, particularly investigating how poisoning the instruction-tuning data in one or two languages can affect the outputs in languages whose instruction-tuning data was not poisoned. Despite its simplicity, our empirical analysis reveals that our method exhibits remarkable efficacy in models like mT5, BLOOM, and GPT-3.5-turbo, with high attack success rates, surpassing 95% in several languages across various scenarios. Alarmingly, our findings also indicate that larger models show increased susceptibility to transferable cross-lingual backdoor attacks, which also applies to LLMs predominantly pre-trained on English data, such as Llama2, Llama3, and Gemma. Moreover, our experiments show that triggers can still work even after paraphrasing, and the backdoor mechanism proves highly effective in cross-lingual response settings across 25 languages, achieving an average attack success rate of 50%. Our study aims to highlight the vulnerabilities and significant security risks present in current multilingual LLMs, underscoring the emergent need for targeted security measures.
Abstract (translated)
针对英语中心的大型语言模型(LLMs)的後门攻击的潜在影响已经得到了广泛的探讨。这些攻击可以通过在训练过程中嵌入恶意行为并激活特定条件来实现。然而,对多语言模型的後门攻击影响的研究仍然较少。我们的研究聚焦于针对多语言LLMs的跨语言後门攻击,特别是研究在某些情况下,通过恶意地污染一个或两个语言的指令微调数据,如何影响未被污染的指令微调数据的输出语言。尽管我们的方法非常简单,但我们的实证分析揭示了它在像mT5、BLOOM和GPT-3.5-turbo等模型上表现出非凡的攻击效果,具有高的攻击成功率,在各种情景下的成功率超过了95%。我们的研究还发现,越大型的模型对可转移的跨语言後门攻击的易感性也越强,这也适用于主要在英语数据上预训练的LLM,如Llama2、Llama3和Gemma。此外,我们的实验还表明,即使进行同义词替换,触发器仍然可以起作用,跨语言响应设置中的后门机制在25种语言上取得了高度有效的攻击成功率,平均攻击成功率为50%。我们的研究旨在强调当前多语言LLM中存在的漏洞和显著的安全风险,强调需要采取针对性的安全措施。
URL
https://arxiv.org/abs/2404.19597