Abstract
Recent text-to-image generation models have demonstrated remarkable success in generating images that faithfully follow input prompts. However, the requirement of using words to describe a desired concept provides limited control over the appearance of the generated concepts. In this work, we address this shortcoming by proposing an approach to enable personalization capabilities in existing text-to-image diffusion models. We propose a novel architecture (BootPIG) that allows a user to provide reference images of an object in order to guide the appearance of a concept in the generated images. The proposed BootPIG architecture makes minimal modifications to a pretrained text-to-image diffusion model and utilizes a separate UNet model to steer the generations toward the desired appearance. We introduce a training procedure that allows us to bootstrap personalization capabilities in the BootPIG architecture using data generated from pretrained text-to-image models, LLM chat agents, and image segmentation models. In contrast to existing methods that require several days of pretraining, the BootPIG architecture can be trained in approximately 1 hour. Experiments on the DreamBooth dataset demonstrate that BootPIG outperforms existing zero-shot methods while remaining comparable to test-time fine-tuning approaches. Through a user study, we validate the preference for BootPIG generations over existing methods, both in maintaining fidelity to the reference object's appearance and in aligning with textual prompts.
URL
https://arxiv.org/abs/2401.13974