Paper Reading AI Learner

BootPIG: Bootstrapping Zero-shot Personalized Image Generation Capabilities in Pretrained Diffusion Models

2024-01-25 06:18:20
Senthil Purushwalkam, Akash Gokul, Shafiq Joty, Nikhil Naik


Recent text-to-image generation models have demonstrated incredible success in generating images that faithfully follow input prompts. However, the requirement of using words to describe a desired concept provides limited control over the appearance of the generated concepts. In this work, we address this shortcoming by proposing an approach to enable personalization capabilities in existing text-to-image diffusion models. We propose a novel architecture (BootPIG) that allows a user to provide reference images of an object in order to guide the appearance of a concept in the generated images. The proposed BootPIG architecture makes minimal modifications to a pretrained text-to-image diffusion model and utilizes a separate UNet model to steer the generations toward the desired appearance. We introduce a training procedure that allows us to bootstrap personalization capabilities in the BootPIG architecture using data generated from pretrained text-to-image models, LLM chat agents, and image segmentation models. In contrast to existing methods that require several days of pretraining, the BootPIG architecture can be trained in approximately 1 hour. Experiments on the DreamBooth dataset demonstrate that BootPIG outperforms existing zero-shot methods while being comparable with test-time finetuning approaches. Through a user study, we validate the preference for BootPIG generations over existing methods both in maintaining fidelity to the reference object's appearance and aligning with textual prompts.
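The abstract says a separate UNet "steers" generation toward the reference appearance but does not spell out the mechanism. A minimal, hypothetical sketch of one plausible realization — appending keys and values extracted from reference features into the base model's self-attention, so generated features can attend to the reference object — is shown below. All function and variable names here are illustrative assumptions, not the paper's actual API.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_with_reference(q, k, v, k_ref, v_ref):
    """Self-attention in which keys/values computed from a reference
    image's features (k_ref, v_ref) are concatenated with the base
    model's own keys/values, letting each generated-feature query also
    attend to the reference appearance.

    Shapes (single head, no batch, for clarity):
      q, k, v       : (n_tokens, d)
      k_ref, v_ref  : (n_ref_tokens, d)
    """
    k_all = np.concatenate([k, k_ref], axis=0)   # (n + n_ref, d)
    v_all = np.concatenate([v, v_ref], axis=0)   # (n + n_ref, d)
    scale = 1.0 / np.sqrt(q.shape[-1])
    weights = softmax(q @ k_all.T * scale)       # (n, n + n_ref)
    return weights @ v_all                       # (n, d)

# Usage: 4 generated tokens attend over themselves plus 6 reference tokens.
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((4, 8)) for _ in range(3))
k_ref, v_ref = (rng.standard_normal((6, 8)) for _ in range(2))
out = attention_with_reference(q, k, v, k_ref, v_ref)
```

In this reading, the pretrained UNet's weights stay (mostly) frozen and only the injected reference pathway is trained, which is consistent with the abstract's claims of "minimal modifications" and a roughly 1-hour training budget.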



