Abstract
Image diffusion models such as DALL-E 2, Imagen, and Stable Diffusion have attracted significant attention due to their ability to generate high-quality synthetic images. In this work, we show that diffusion models memorize individual images from their training data and emit them at generation time. With a generate-and-filter pipeline, we extract over a thousand training examples from state-of-the-art models, ranging from photographs of individual people to trademarked company logos. We also train hundreds of diffusion models in various settings to analyze how different modeling and data decisions affect privacy. Overall, our results show that diffusion models are much less private than prior generative models such as GANs, and that mitigating these vulnerabilities may require new advances in privacy-preserving training.
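The "generate-and-filter" extraction pipeline mentioned above can be illustrated with a toy sketch: sample a model many times on the same prompt, then flag samples that recur almost identically across generations, since dense clusters of near-duplicate outputs suggest a regurgitated training example. This is a minimal, self-contained illustration, not the paper's actual code; the model is replaced by a stand-in sampler, and all names and thresholds (`eps`, `min_clique`, the 40% regurgitation rate) are invented for the example.

```python
import random

def generate(prompt, n, seed=0):
    # Stand-in for sampling a diffusion model n times on one prompt.
    # A memorized training image shows up as many near-identical samples;
    # here we fake that with a fixed point emitted 40% of the time.
    random.seed(seed)
    memorized = (0.1, 0.9)  # pretend this point is a training image
    samples = []
    for _ in range(n):
        if random.random() < 0.4:
            samples.append(memorized)               # regurgitation
        else:
            samples.append((random.random(), random.random()))  # novel sample
    return samples

def l2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def filter_memorized(samples, eps=0.05, min_clique=5):
    # Flag samples with many near-duplicates among the other generations:
    # a dense clique of near-identical outputs indicates likely memorization.
    flagged = []
    for i, s in enumerate(samples):
        dupes = sum(1 for j, t in enumerate(samples)
                    if i != j and l2(s, t) < eps)
        if dupes >= min_clique:
            flagged.append(s)
    return flagged

hits = filter_memorized(generate("some caption", 50))
print(len(hits))  # the repeated "memorized" point gets flagged
```

In the real pipeline the duplicate test operates on high-dimensional image embeddings rather than 2-D points, and a further filter compares flagged generations against the training set itself; the clique-density idea is the same.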
URL
https://arxiv.org/abs/2301.13188