Abstract
Diffusion models are widely used for image editing tasks. Existing editing methods often design a representation manipulation procedure by curating an edit direction in the text embedding or score space. However, such a procedure faces a key challenge: overestimating the edit strength harms visual consistency, while underestimating it fails to accomplish the edit. Notably, each source image may require a different editing strength, and it is costly to search for an appropriate strength via trial and error. To address this challenge, we propose Concept Lancet (CoLan), a zero-shot plug-and-play framework for principled representation manipulation in diffusion-based image editing. At inference time, we decompose the source input in the latent (text embedding or diffusion score) space as a sparse linear combination of the representations of the collected visual concepts. This allows us to accurately estimate the presence of concepts in each image, which informs the edit. Based on the editing task (replace/add/remove), we perform a customized concept transplant process to impose the corresponding editing direction. To sufficiently model the concept space, we curate a conceptual representation dataset, CoLan-150K, which contains diverse descriptions and scenarios of visual terms and phrases for the latent dictionary. Experiments on multiple diffusion-based image editing baselines show that methods equipped with CoLan achieve state-of-the-art performance in editing effectiveness and consistency preservation.
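The decomposition-and-transplant idea can be illustrated with a minimal sketch. This is not the paper's implementation: the toy dictionary, the lasso solver, and the concept indices below are all illustrative assumptions. It only shows the two steps the abstract describes: (1) express a source latent as a sparse linear combination of concept atoms to estimate how strongly each concept is present, and (2) for a "replace" edit, move the estimated magnitude of the source concept onto the target concept's atom.

```python
# Hypothetical sketch of CoLan-style concept decomposition and transplant.
# The dictionary D, the lasso solver, and the concept indices are
# illustrative assumptions, not the paper's actual components.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, K = 64, 10                       # latent dimension, number of concept atoms
D = rng.normal(size=(d, K))
D /= np.linalg.norm(D, axis=0)      # unit-norm concept dictionary (columns = concepts)

src, tgt = 3, 7                     # e.g., replace concept 3 with concept 7
# Synthetic source latent: strongly contains concept `src`, plus another concept and noise
x = 0.8 * D[:, src] + 0.5 * D[:, 1] + 0.05 * rng.normal(size=d)

# Step 1: sparse decomposition x ≈ D @ w, so w estimates each concept's presence
w = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000).fit(D, x).coef_

# Step 2: concept transplant for a "replace" edit — subtract the source
# concept at its estimated strength and add the target concept at the same strength
x_edit = x + w[src] * (D[:, tgt] - D[:, src])
```

Because the edit strength `w[src]` is estimated per input rather than fixed globally, an image that strongly expresses the source concept receives a larger shift than one that barely contains it, which is the abstract's answer to the over/underestimation trade-off.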
URL
https://arxiv.org/abs/2504.02828