Abstract
Recent text-to-image generative models have demonstrated an unparalleled ability to generate diverse and creative imagery guided by a target text prompt. While revolutionary, current state-of-the-art diffusion models may still fail to generate images that fully convey the semantics of the given text prompt. We analyze the publicly available Stable Diffusion model and assess the existence of catastrophic neglect, where the model fails to generate one or more of the subjects from the input prompt. Moreover, we find that in some cases the model also fails to correctly bind attributes (e.g., colors) to their corresponding subjects. To help mitigate these failure cases, we introduce the concept of Generative Semantic Nursing (GSN), where we seek to intervene in the generative process on the fly during inference time to improve the faithfulness of the generated images. Using an attention-based formulation of GSN, dubbed Attend-and-Excite, we guide the model to refine the cross-attention units to attend to all subject tokens in the text prompt and strengthen, or excite, their activations, encouraging the model to generate all subjects described in the text prompt. We compare our approach to alternative approaches and demonstrate that it conveys the desired concepts more faithfully across a range of text prompts.
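The core idea, as described in the abstract, is to score how strongly each subject token is attended to spatially and to nudge the latent so the most-neglected subject is strengthened. Below is a minimal NumPy sketch of that logic; the function names (`attend_excite_loss`, `gsn_step`), the step size, and the toy attention shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def attend_excite_loss(attn_maps, subject_token_ids):
    """Loss over cross-attention maps.

    attn_maps: array of shape (num_tokens, H, W) holding the per-token
    spatial cross-attention. For each subject token we take its peak
    spatial activation; the loss focuses on the *most neglected* subject,
    so it is high whenever any subject's peak attention is low.
    (Sketch only; shapes and normalization are assumptions.)
    """
    peak_activations = [attn_maps[t].max() for t in subject_token_ids]
    return max(1.0 - a for a in peak_activations)

def gsn_step(latent, grad_fn, step_size=20.0):
    """One on-the-fly Generative Semantic Nursing update at inference time:
    shift the current latent against the gradient of the attention loss.
    grad_fn is a stand-in for the loss gradient w.r.t. the latent."""
    return latent - step_size * grad_fn(latent)
```

In the full method this update would be applied at selected denoising steps of the diffusion process, with the attention maps read out of the model's cross-attention layers; the sketch above only captures the loss-and-update shape of that loop.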
URL
https://arxiv.org/abs/2301.13826