Abstract
Diffusion models have made significant advances in text-guided synthesis tasks. However, editing user-provided images remains challenging, as the high-dimensional noise input space of diffusion models is not naturally suited for image inversion or spatial editing. In this work, we propose an image representation that promotes spatial editing of input images using a diffusion model. Concretely, we learn to encode an input image into "image elements" that can faithfully reconstruct it. These elements can be intuitively edited by a user and are decoded by a diffusion model into realistic images. We show the effectiveness of our representation on various image editing tasks, such as object resizing, rearrangement, dragging, de-occlusion, removal, variation, and image composition. Project page: this https URL
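To make the described pipeline concrete, below is a minimal, hypothetical sketch of the encode-edit-decode loop the abstract outlines. Every name here (ImageElement, encode_to_elements, diffusion_decode) and the grid-based encoder are illustrative assumptions, not the paper's actual interface: the method learns the encoding, and the real decoder is a diffusion model rather than the simple compositor stubbed in to keep the sketch runnable.

```python
# Hypothetical sketch: encode an image into editable "image elements",
# let the user edit their spatial parameters, then decode the result.

from dataclasses import dataclass, replace
from typing import List

import numpy as np


@dataclass(frozen=True)
class ImageElement:
    """One editable unit: an appearance feature plus spatial parameters."""
    feature: np.ndarray  # appearance embedding (what the element looks like)
    cx: float            # center x, normalized to [0, 1]
    cy: float            # center y, normalized to [0, 1]
    scale: float         # relative element size


def encode_to_elements(image: np.ndarray, grid: int = 4) -> List[ImageElement]:
    """Placeholder encoder: average-pool a coarse grid into elements.

    The paper *learns* this encoding; a fixed grid only stands in for it.
    """
    h, w, _ = image.shape
    elements = []
    for i in range(grid):
        for j in range(grid):
            patch = image[i * h // grid:(i + 1) * h // grid,
                          j * w // grid:(j + 1) * w // grid]
            elements.append(ImageElement(
                feature=patch.mean(axis=(0, 1)),  # crude appearance code
                cx=(j + 0.5) / grid,
                cy=(i + 0.5) / grid,
                scale=1.0 / grid,
            ))
    return elements


def diffusion_decode(elements: List[ImageElement], size: int = 64) -> np.ndarray:
    """Stub decoder: paint each element as a square patch at its location.

    In the actual method this step is a diffusion model conditioned on the
    elements; here we only composite them so the example executes.
    """
    canvas = np.zeros((size, size, 3))
    for e in elements:
        half = max(1, int(e.scale * size / 2))
        x, y = int(e.cx * size), int(e.cy * size)
        canvas[max(0, y - half):y + half, max(0, x - half):x + half] = e.feature
    return canvas


# Example edit: drag one element to a new location, enlarge it, and decode.
image = np.random.rand(64, 64, 3)  # stand-in for a user-provided image
elements = encode_to_elements(image)
elements[5] = replace(elements[5], cx=0.8, cy=0.2, scale=0.3)  # user edit
edited = diffusion_decode(elements)
print(edited.shape)  # (64, 64, 3)
```

The point of the sketch is the interface, not the models: edits happen on a small list of interpretable element parameters (position, scale, appearance) rather than in the diffusion model's noise space, which is what makes operations like resizing, dragging, and removal direct to express.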
URL
https://arxiv.org/abs/2404.16029