$P+$: Extended Textual Conditioning in Text-to-Image Generation

2023-03-16 17:38:15
Andrey Voynov, Qinghao Chu, Daniel Cohen-Or, Kfir Aberman

Abstract

We introduce an Extended Textual Conditioning space in text-to-image models, referred to as $P+$. This space consists of multiple textual conditions, derived from per-layer prompts, each corresponding to a layer of the denoising U-net of the diffusion model. We show that the extended space provides greater disentangling and control over image synthesis. We further introduce Extended Textual Inversion (XTI), where the images are inverted into $P+$, and represented by per-layer tokens. We show that XTI is more expressive and precise, and converges faster than the original Textual Inversion (TI) space. The extended inversion method does not involve any noticeable trade-off between reconstruction and editability and induces more regular inversions. We conduct a series of extensive experiments to analyze and understand the properties of the new space, and to showcase the effectiveness of our method for personalizing text-to-image models. Furthermore, we utilize the unique properties of this space to achieve previously unattainable results in object-style mixing using text-to-image models. Project page: this https URL
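The core idea described above is that each cross-attention layer of the denoising U-net receives its own textual condition instead of one prompt embedding shared by all layers. The following is a minimal numpy sketch of that conditioning scheme, not the authors' implementation: the toy dimensions, random projection weights, and the `cross_attention` helper are all illustrative assumptions; a real model would use a trained U-net and CLIP text embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_attention(x, text_emb, wq, wk, wv):
    """One cross-attention step: image tokens x attend to text tokens."""
    q, k, v = x @ wq, text_emb @ wk, text_emb @ wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

dim, text_dim, num_layers = 8, 6, 4

# Illustrative projection weights, one triple per U-net cross-attention layer.
params = [
    (rng.standard_normal((dim, dim)),
     rng.standard_normal((text_dim, dim)),
     rng.standard_normal((text_dim, dim)))
    for _ in range(num_layers)
]

# Standard conditioning would pass ONE prompt embedding to every layer.
# The extended space P+ instead supplies a list of per-layer prompt
# embeddings, so different layers can be steered independently.
per_layer_prompts = [rng.standard_normal((5, text_dim))
                     for _ in range(num_layers)]

x = rng.standard_normal((10, dim))  # latent image tokens
for (wq, wk, wv), text_emb in zip(params, per_layer_prompts):
    x = x + cross_attention(x, text_emb, wq, wk, wv)  # residual update

print(x.shape)  # (10, 8)
```

Under this view, Extended Textual Inversion (XTI) optimizes one learned token per layer (one entry of `per_layer_prompts` each), rather than a single token shared across all layers as in the original Textual Inversion.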

URL

https://arxiv.org/abs/2303.09522

PDF

https://arxiv.org/pdf/2303.09522

