Abstract
Diffusion models have demonstrated excellent capabilities in text-to-image generation. Their semantic understanding (i.e., prompt-following) ability has also been greatly improved with large language models (e.g., T5, Llama). However, existing models still cannot perfectly handle long and complex text prompts, especially when the prompts contain various objects with numerous attributes and interrelated spatial relationships. While many regional prompting methods have been proposed for UNet-based models (SD1.5, SDXL), there are still no implementations based on the recent Diffusion Transformer (DiT) architecture, such as SD3 and FLUX.1. In this report, we propose and implement regional prompting for FLUX.1 based on attention manipulation, which equips DiT with fine-grained compositional text-to-image generation capability in a training-free manner. Code is available at this https URL.
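Going only by the abstract's description, the sketch below illustrates one plausible way such attention manipulation can be realized in a DiT's joint image-text attention: each regional prompt's text tokens are made visible only to the image tokens inside that region's spatial mask. All names, shapes, and mask-construction details here are illustrative assumptions, not the paper's actual implementation.

import torch

def build_regional_attention_mask(region_masks, prompt_lens, h, w):
    """Boolean mask restricting image-to-text attention by region (illustrative).

    region_masks: list of (h, w) bool tensors, one spatial mask per regional prompt
    prompt_lens:  token count of each regional prompt, concatenated in the same order
    Returns an (h*w, total_text_len) mask; True = attention allowed.
    """
    total_text = sum(prompt_lens)
    mask = torch.zeros(h * w, total_text, dtype=torch.bool)
    offset = 0
    for region, n_tok in zip(region_masks, prompt_lens):
        rows = region.reshape(-1)                   # flatten spatial mask over image tokens
        mask[rows, offset:offset + n_tok] = True    # region's tokens see only its own prompt
        offset += n_tok
    return mask

# Usage: two regional prompts covering the left/right halves of a 2x4 latent grid.
h, w = 2, 4
left = torch.zeros(h, w, dtype=torch.bool)
left[:, :2] = True
right = ~left
mask = build_regional_attention_mask([left, right], prompt_lens=[3, 3], h=h, w=w)
# torch.nn.functional.scaled_dot_product_attention accepts such a boolean
# attn_mask directly (True = keep); it can also be converted to an additive
# bias (0 where allowed, -inf elsewhere) for attention implementations
# that expect one.

A real system would likely also allow some cross-region interaction, e.g., extra allowed entries for a shared base prompt, so that regions blend coherently rather than rendering as independent tiles.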
URL
https://arxiv.org/abs/2411.02395