Abstract
Text-to-image synthesis has become highly popular for generating realistic and stylized images, and specialized tasks often require fine-tuning generative models on domain-specific datasets. However, these valuable datasets face risks of unauthorized use and unapproved sharing, compromising the rights of their owners. In this paper, we address the issue of dataset abuse during the fine-tuning of Stable Diffusion models for text-to-image synthesis. We present a dataset watermarking framework designed to detect unauthorized usage and trace data leaks. The framework employs two key strategies across multiple watermarking schemes and is effective for large-scale dataset authorization. Extensive experiments demonstrate the framework's effectiveness, its minimal impact on the dataset (modifying only 2% of the data suffices for high detection accuracy), and its ability to trace data leaks. Our results also highlight the robustness and transferability of the framework, demonstrating its practical applicability for detecting dataset abuse.
URL
https://arxiv.org/abs/2409.18897