Abstract
Existing text-to-image diffusion models primarily generate images from text prompts. However, the inherent conciseness of textual descriptions poses challenges in faithfully synthesizing images with intricate details, such as specific entities or scenes. This paper presents \textbf{UNIMO-G}, a simple multimodal conditional diffusion framework that operates on multimodal prompts with interleaved textual and visual inputs, and demonstrates a unified capability for both text-driven and subject-driven image generation. UNIMO-G comprises two core components: a Multimodal Large Language Model (MLLM) for encoding multimodal prompts, and a conditional denoising diffusion network for generating images based on the encoded multimodal input. We leverage a two-stage training strategy to effectively train the framework: first pre-training on large-scale text-image pairs to develop conditional image generation capabilities, and then instruction tuning with multimodal prompts to achieve unified image generation proficiency. A well-designed data processing pipeline involving language grounding and image segmentation is employed to construct multimodal prompts. UNIMO-G excels in both text-to-image generation and zero-shot subject-driven synthesis, and is notably effective in generating high-fidelity images from complex multimodal prompts involving multiple image entities.
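To make the two-component design concrete, below is a minimal PyTorch sketch of an MLLM-style encoder that maps an interleaved text-and-image prompt to a single conditioning sequence, and a denoiser that consumes that sequence through cross-attention during an epsilon-prediction training step. All module names, dimensions, the concatenation-based interleaving, and the toy noise schedule are illustrative assumptions for exposition, not the actual UNIMO-G implementation.

```python
# Hedged sketch of the framework described in the abstract: an MLLM-like
# multimodal prompt encoder plus a conditional denoising network.
# Shapes, layer choices, and the noise schedule are placeholders.
import torch
import torch.nn as nn


class MultimodalPromptEncoder(nn.Module):
    """Stand-in for the MLLM: encodes interleaved text/image tokens into one sequence."""

    def __init__(self, vocab_size=32000, dim=1024, image_feat_dim=1024):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, dim)
        self.image_proj = nn.Linear(image_feat_dim, dim)  # project visual features
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=2,
        )

    def forward(self, text_ids, image_feats):
        # Simple concatenation here; the real model interleaves visual tokens
        # at their referenced positions inside the prompt.
        tokens = torch.cat(
            [self.text_embed(text_ids), self.image_proj(image_feats)], dim=1
        )
        return self.backbone(tokens)  # (B, L, dim) conditioning sequence


class ConditionalDenoiser(nn.Module):
    """Toy denoiser: predicts noise from a noisy latent given the prompt encoding."""

    def __init__(self, latent_dim=64, cond_dim=1024, dim=512):
        super().__init__()
        self.in_proj = nn.Linear(latent_dim, dim)
        self.time_embed = nn.Linear(1, dim)
        self.cross_attn = nn.MultiheadAttention(
            dim, num_heads=8, kdim=cond_dim, vdim=cond_dim, batch_first=True
        )
        self.out_proj = nn.Linear(dim, latent_dim)

    def forward(self, noisy_latent, t, cond):
        h = self.in_proj(noisy_latent) + self.time_embed(t.float().unsqueeze(-1))
        h = h.unsqueeze(1)                     # (B, 1, dim) query over the latent
        h, _ = self.cross_attn(h, cond, cond)  # attend to multimodal prompt tokens
        return self.out_proj(h.squeeze(1))     # predicted noise


# One simplified training step (epsilon-prediction objective).
encoder, denoiser = MultimodalPromptEncoder(), ConditionalDenoiser()
text_ids = torch.randint(0, 32000, (2, 16))   # tokenized prompt text
image_feats = torch.randn(2, 4, 1024)         # features of referenced entity images
latent = torch.randn(2, 64)                   # clean image latent (placeholder)
t = torch.randint(0, 1000, (2,))
noise = torch.randn_like(latent)
alpha = 1 - t.float().unsqueeze(-1) / 1000    # toy noise schedule
noisy = alpha.sqrt() * latent + (1 - alpha).sqrt() * noise
cond = encoder(text_ids, image_feats)
loss = nn.functional.mse_loss(denoiser(noisy, t, cond), noise)
loss.backward()
```

Under this reading, the two-stage strategy in the abstract would first train the denoiser with text-only conditioning, then instruction-tune the whole stack on multimodal prompts built by the grounding-and-segmentation pipeline.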
URL
https://arxiv.org/abs/2401.13388