Abstract
Diffusion models have demonstrated their capability to synthesize high-quality and diverse images from textual prompts. However, simultaneous control over both global contexts (e.g., object layouts and interactions) and local details (e.g., colors and emotions) remains a significant challenge. Models often fail to understand complex descriptions involving multiple objects, applying the specified visual attributes to the wrong targets or ignoring them altogether. This paper presents Global-Local Diffusion (GLoD), a novel framework that allows simultaneous control over global contexts and local details in text-to-image generation without requiring training or fine-tuning. It assigns multiple global and local prompts to corresponding layers and composes their noise predictions to guide the denoising process of a pre-trained diffusion model. Our framework enables complex global-local compositions, conditioning objects in the global prompt on the local prompts while preserving other, unspecified identities. Our quantitative and qualitative evaluations demonstrate that GLoD effectively generates complex images that adhere to both user-provided object interactions and object details.
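The abstract does not include code, but the mechanism it describes (predicting noise separately for a global prompt and several local prompts, then composing those predictions by layer during denoising) can be sketched with a standard pre-trained model. Below is a minimal illustrative sketch, not the authors' implementation, using Stable Diffusion v1.5 via the `diffusers` library; the binary layer masks, the mask-based substitution rule for composing noises, and the placement of classifier-free guidance are assumptions made for illustration.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)
unet, scheduler = pipe.unet, pipe.scheduler
tokenizer, text_encoder = pipe.tokenizer, pipe.text_encoder


@torch.no_grad()
def encode(prompt):
    # Encode a text prompt into CLIP embeddings for the UNet's cross-attention.
    ids = tokenizer(prompt, padding="max_length",
                    max_length=tokenizer.model_max_length,
                    truncation=True, return_tensors="pt").input_ids.to(device)
    return text_encoder(ids)[0]


@torch.no_grad()
def generate(global_prompt, local_prompts, masks, steps=50, guidance=7.5, seed=0):
    # `masks`: one (1, 1, 64, 64) binary tensor per local prompt, marking the
    # layer (latent-space region) that the local prompt should control.
    gen = torch.Generator(device).manual_seed(seed)
    latents = torch.randn((1, unet.config.in_channels, 64, 64),
                          generator=gen, device=device) * scheduler.init_noise_sigma
    scheduler.set_timesteps(steps, device=device)

    uncond = encode("")
    cond_global = encode(global_prompt)
    cond_locals = [encode(p) for p in local_prompts]

    for t in scheduler.timesteps:
        x = scheduler.scale_model_input(latents, t)
        eps_uncond = unet(x, t, encoder_hidden_states=uncond).sample
        eps = unet(x, t, encoder_hidden_states=cond_global).sample
        # Compose: keep the global noise everywhere, but inside each layer mask
        # substitute the noise predicted from that layer's local prompt, so local
        # details are applied only to the intended object (illustrative rule).
        for cond_local, mask in zip(cond_locals, masks):
            m = mask.to(device=device, dtype=eps.dtype)
            eps_local = unet(x, t, encoder_hidden_states=cond_local).sample
            eps = m * eps_local + (1 - m) * eps
        # Standard classifier-free guidance applied to the composed noise.
        eps = eps_uncond + guidance * (eps - eps_uncond)
        latents = scheduler.step(eps, t, latents).prev_sample

    image = pipe.vae.decode(latents / pipe.vae.config.scaling_factor).sample
    image = ((image / 2 + 0.5).clamp(0, 1) * 255).round()
    image = image.permute(0, 2, 3, 1).byte().cpu().numpy()[0]
    return Image.fromarray(image)
```

A call such as `generate("a dog and a cat on a sofa", ["a black dog", "an orange cat"], [dog_mask, cat_mask])` would then steer each masked region with its local prompt while the global prompt governs the overall layout and interaction.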
URL
https://arxiv.org/abs/2404.15447