Abstract
Modern large language models (LLMs) encode a significant amount of world knowledge, which, when harnessed properly, enables strong performance on commonsense reasoning and knowledge-intensive tasks. These models can also learn social biases, which carry significant potential for societal harm. Many mitigation strategies have been proposed for LLM safety, but it is unclear how effective they are at eliminating social biases. In this work, we propose a new methodology for attacking language models with knowledge graph-augmented generation. We refactor natural language stereotypes into a knowledge graph and use adversarial attack strategies to induce biased responses from several open- and closed-source language models. We find that our method increases bias in all models, even those trained with safety guardrails. This demonstrates the need for further research in AI safety, and further work in this new adversarial space.
URL
https://arxiv.org/abs/2405.04756