Abstract
Generative techniques for image anonymization have great potential to generate datasets that protect the privacy of those depicted in the images, while achieving high data fidelity and utility. Existing methods have focused extensively on preserving facial attributes, but have failed to adopt a more comprehensive perspective that incorporates the scene and background into the anonymization process. This paper presents, to the best of our knowledge, the first approach to image anonymization based on Latent Diffusion Models (LDMs). Every element of a scene is maintained to convey the same meaning, yet manipulated in a way that makes re-identification difficult. We propose two LDMs for this purpose: CAMOUFLaGE-Base exploits a combination of pre-trained ControlNets and a new controlling mechanism designed to increase the distance between the real and anonymized images. CAMOUFLaGE-Light is based on the Adapter technique, coupled with an encoding designed to efficiently represent the attributes of different persons in a scene. The former solution achieves superior performance on most metrics and benchmarks, while the latter cuts the inference time in half at the cost of fine-tuning a lightweight module. We show through extensive experimental comparison that the proposed method is competitive with the state of the art concerning identity obfuscation, while better preserving the original content of the image and tackling unresolved challenges that current solutions fail to address.
URL
https://arxiv.org/abs/2403.14790