Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models

2023-01-31 18:10:38
Hila Chefer, Yuval Alaluf, Yael Vinker, Lior Wolf, Daniel Cohen-Or

Abstract

Recent text-to-image generative models have demonstrated an unparalleled ability to generate diverse and creative imagery guided by a target text prompt. While revolutionary, current state-of-the-art diffusion models may still fail in generating images that fully convey the semantics in the given text prompt. We analyze the publicly available Stable Diffusion model and assess the existence of catastrophic neglect, where the model fails to generate one or more of the subjects from the input prompt. Moreover, we find that in some cases the model also fails to correctly bind attributes (e.g., colors) to their corresponding subjects. To help mitigate these failure cases, we introduce the concept of Generative Semantic Nursing (GSN), where we seek to intervene in the generative process on the fly during inference time to improve the faithfulness of the generated images. Using an attention-based formulation of GSN, dubbed Attend-and-Excite, we guide the model to refine the cross-attention units to attend to all subject tokens in the text prompt and strengthen - or excite - their activations, encouraging the model to generate all subjects described in the text prompt. We compare our approach to alternative approaches and demonstrate that it conveys the desired concepts more faithfully across a range of text prompts.
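
The core idea is easy to sketch: at each denoising step, read the cross-attention map of every subject token, measure how strongly its best-attended patch fires, and nudge the noised latent so that the weakest subject becomes better attended. Below is a minimal, hypothetical PyTorch sketch of that loop. The function names (`attend_and_excite_loss`, `refine_latent`), the step size, and the average-pool stand-in for the paper's Gaussian smoothing are illustrative assumptions, not the authors' released code; in the real pipeline the attention maps come from Stable Diffusion's UNet cross-attention layers.

```python
import torch
import torch.nn.functional as F

def attend_and_excite_loss(attn, subject_indices):
    """Attend-and-Excite loss over per-token cross-attention maps.

    attn: tensor of shape (num_pixels, num_tokens), e.g. the 16x16
          cross-attention maps averaged over heads (256 x 77 in Stable
          Diffusion), normalized over the text tokens.
    subject_indices: prompt positions of the subject tokens to "excite".
    """
    peak_per_subject = []
    for idx in subject_indices:
        spatial = attn[:, idx]
        side = int(spatial.numel() ** 0.5)
        spatial = spatial.reshape(1, 1, side, side)
        # The paper smooths each map with a Gaussian kernel before taking
        # the max; a 3x3 average pool stands in for that kernel here.
        spatial = F.avg_pool2d(spatial, kernel_size=3, stride=1, padding=1)
        peak_per_subject.append(spatial.max())
    # Penalize the most-neglected subject: the loss reaches zero only once
    # every subject token has at least one strongly attended patch.
    return 1.0 - torch.stack(peak_per_subject).min()

def refine_latent(latent, loss, step_size=20.0):
    """One Generative Semantic Nursing step: z_t <- z_t - alpha * grad(L)."""
    grad = torch.autograd.grad(loss, latent)[0]
    return (latent - step_size * grad).detach().requires_grad_(True)

# Toy usage. In practice `attn` is read out of the UNet's cross-attention
# layers at denoising step t and therefore depends on `latent`; the random
# projection below only mimics that dependency so the sketch runs standalone.
latent = torch.randn(1, 4, 64, 64, requires_grad=True)
proj = torch.randn(1, 77)
attn = torch.softmax(latent.mean(dim=1).reshape(-1, 1) * proj, dim=-1)
loss = attend_and_excite_loss(attn, subject_indices=[2, 5])
latent = refine_latent(latent, loss)
```

Because the update happens on the fly at inference time, the pretrained model is left untouched: repeating this refinement across the early denoising steps steers generation toward prompts' neglected subjects without any fine-tuning.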


URL

https://arxiv.org/abs/2301.13826

PDF

https://arxiv.org/pdf/2301.13826.pdf

