Abstract
In this work, we study the task of sketch-guided image inpainting. Unlike the well-explored task of natural language-guided image inpainting, which excels at capturing semantic detail, the comparatively less-studied sketch-guided variant gives the user finer control over the shape and pose of the object to be inpainted. As one of the early solutions to this task, we introduce a novel partial discrete diffusion process (PDDP). The forward pass of the PDDP corrupts only the masked regions of the image, and the backward pass reconstructs these regions conditioned on a hand-drawn sketch using our proposed sketch-guided bi-directional transformer. This transformer module takes two inputs, the image containing the masked region to be inpainted and the query sketch, to model the reverse diffusion process. This strategy effectively addresses the domain gap between sketches and natural images, thereby enhancing the quality of the inpainting results. In the absence of a large-scale dataset specific to this task, we synthesize a dataset from MS-COCO to train our framework and to evaluate it extensively against several competitive approaches from the literature. Qualitative and quantitative results, as well as user studies, establish that the proposed method inpaints realistic objects that fit the context and match the visual appearance of the provided sketch. To aid further research, we have made our code publicly available at this https URL.
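The key idea, corrupting only the user-masked region during the forward diffusion while leaving the known image content untouched, can be illustrated with a minimal sketch. The code below assumes an absorbing-state discrete diffusion over a grid of quantized image tokens; the grid size, vocabulary size, and dedicated [MASK] token are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a 16x16 grid of discrete image-token indices drawn
# from a vocabulary of 1024, with one extra index reserved as the [MASK]
# (absorbing) state. These sizes are assumptions for illustration.
VOCAB = 1024
MASK_TOKEN = VOCAB
tokens = rng.integers(0, VOCAB, size=(16, 16))

# Binary inpainting mask: True = region the user wants repainted.
region = np.zeros((16, 16), dtype=bool)
region[4:12, 4:12] = True

def partial_forward_step(x, region, gamma, rng):
    """One forward step of a *partial* absorbing discrete diffusion:
    each token inside the masked region is independently replaced by
    [MASK] with probability gamma; tokens outside the region are kept
    intact, so known image content is never corrupted."""
    corrupt = region & (rng.random(x.shape) < gamma)
    out = x.copy()
    out[corrupt] = MASK_TOKEN
    return out

x_t = partial_forward_step(tokens, region, gamma=0.5, rng=rng)
# Tokens outside the region are unchanged; inside, roughly half are [MASK].
assert np.array_equal(x_t[~region], tokens[~region])
```

The reverse process would then train a conditional model (here, the sketch-guided bi-directional transformer) to predict the original tokens at the [MASK] positions given the uncorrupted context and the query sketch.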
URL
https://arxiv.org/abs/2404.11949