Paper Reading AI Learner

Taming Encoder for Zero Fine-tuning Image Customization with Text-to-Image Diffusion Models

2023-04-05 17:59:32
Xuhui Jia, Yang Zhao, Kelvin C.K. Chan, Yandong Li, Han Zhang, Boqing Gong, Tingbo Hou, Huisheng Wang, Yu-Chuan Su

Abstract

This paper proposes a method for generating images of customized objects specified by users. The method is based on a general framework that bypasses the lengthy optimization required by previous approaches, which often employ a per-object optimization paradigm. Our framework adopts an encoder to capture high-level, identifiable semantics of objects, producing an object-specific embedding with only a single feed-forward pass. The acquired object embedding is then passed to a text-to-image synthesis model for subsequent generation. To effectively blend an object-aware embedding space into a well-developed text-to-image model under the same generation context, we investigate different network designs and training strategies, and propose a simple yet effective regularized joint training scheme with an object identity preservation loss. Additionally, we propose a caption generation scheme that is critical to ensuring the object-specific embedding is faithfully reflected in the generation process while preserving control and editing abilities. Once trained, the network is able to produce diverse content and styles, conditioned on both texts and objects. We demonstrate through experiments that our proposed method is able to synthesize images with compelling output quality, appearance diversity, and object fidelity, without the need for test-time optimization. Systematic studies are also conducted to analyze our models, providing insights for future work.
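The abstract does not spell out the loss formulation, but the described "regularized joint training scheme with an object identity preservation loss" can be illustrated with a minimal sketch. The function names, the cosine-similarity identity term, and the weighting factor `lam` below are all assumptions for illustration, not the paper's actual implementation:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors (plain Python lists)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def identity_preservation_loss(obj_embedding, generated_embedding):
    """Hypothetical identity term: penalize deviation of the generated
    object's embedding from the reference object embedding produced by
    the encoder. Zero when the two embeddings align perfectly."""
    return 1.0 - cosine_similarity(obj_embedding, generated_embedding)

def joint_loss(denoising_loss, obj_embedding, generated_embedding, lam=0.1):
    """Sketch of a regularized joint objective: the base diffusion
    (denoising) loss plus a weighted identity-preservation term."""
    return denoising_loss + lam * identity_preservation_loss(
        obj_embedding, generated_embedding
    )
```

In this sketch, a larger `lam` trades generation flexibility for tighter object fidelity, which mirrors the fidelity-versus-editability tension the paper's caption-generation scheme is designed to manage.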

URL

https://arxiv.org/abs/2304.02642

PDF

https://arxiv.org/pdf/2304.02642.pdf

