Abstract
Text-to-image (T2I) diffusion models have shown exceptional capabilities in generating images that closely correspond to textual prompts. However, the advancement of T2I diffusion models presents significant risks, as the models could be exploited for malicious purposes, such as generating images with violence or nudity, or creating unauthorized portraits of public figures in inappropriate contexts. To mitigate these risks, concept removal methods have been proposed, which modify diffusion models to prevent the generation of malicious and unwanted concepts. Despite these efforts, existing research faces several challenges: (1) a lack of consistent comparisons on a comprehensive dataset, (2) ineffective prompts for harmful and nudity concepts, and (3) overlooked evaluation of whether models can still generate the benign portions of prompts that contain malicious concepts. To address these gaps, we propose to benchmark concept removal methods by introducing a new dataset, Six-CD, along with a novel evaluation metric. In this benchmark, we conduct a thorough evaluation of concept removal methods, with the experimental observations and discussions offering valuable insights into the field.
URL
https://arxiv.org/abs/2406.14855