Paper Reading AI Learner

UNIMO-G: Unified Image Generation through Multimodal Conditional Diffusion

2024-01-24 11:36:44
Wei Li, Xue Xu, Jiachen Liu, Xinyan Xiao

Abstract

Existing text-to-image diffusion models primarily generate images from text prompts. However, the inherent conciseness of textual descriptions poses challenges in faithfully synthesizing images with intricate details, such as specific entities or scenes. This paper presents UNIMO-G, a simple multimodal conditional diffusion framework that operates on multimodal prompts with interleaved textual and visual inputs, and demonstrates a unified ability for both text-driven and subject-driven image generation. UNIMO-G comprises two core components: a Multimodal Large Language Model (MLLM) for encoding multimodal prompts, and a conditional denoising diffusion network for generating images based on the encoded multimodal input. We leverage a two-stage training strategy to effectively train the framework: first, pre-training on large-scale text-image pairs to develop conditional image generation capabilities, and then instruction tuning with multimodal prompts to achieve unified image generation proficiency. A well-designed data processing pipeline involving language grounding and image segmentation is employed to construct multimodal prompts. UNIMO-G excels in both text-to-image generation and zero-shot subject-driven synthesis, and is notably effective in generating high-fidelity images from complex multimodal prompts involving multiple image entities.
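To make the described architecture concrete, the sketch below shows, in schematic PyTorch, how a multimodal prompt encoder and a cross-attention-conditioned denoiser could fit together for one training step of a standard noise-prediction objective. This is a minimal illustration under our own assumptions, not the paper's implementation: the module names are hypothetical, the encoder is a stand-in for a real MLLM, and a proper noise scheduler and VAE latent space are omitted.

```python
import torch
import torch.nn as nn

class MultimodalPromptEncoder(nn.Module):
    """Hypothetical stand-in for the MLLM that encodes an interleaved
    text/image prompt into a sequence of conditioning embeddings."""
    def __init__(self, dim=768):
        super().__init__()
        self.text_proj = nn.Linear(dim, dim)   # placeholder for the MLLM text pathway
        self.image_proj = nn.Linear(dim, dim)  # placeholder for the MLLM vision pathway

    def forward(self, text_tokens, image_tokens):
        # Concatenate text and visual token embeddings into one prompt sequence.
        return torch.cat([self.text_proj(text_tokens),
                          self.image_proj(image_tokens)], dim=1)  # (batch, seq, dim)

class ConditionalDenoiser(nn.Module):
    """Toy denoiser: predicts noise from a noisy latent, the timestep, and
    the multimodal conditioning sequence via cross-attention."""
    def __init__(self, latent_dim=64, cond_dim=768, heads=8):
        super().__init__()
        self.latent_proj = nn.Linear(latent_dim, cond_dim)
        self.time_embed = nn.Embedding(1000, cond_dim)
        self.cross_attn = nn.MultiheadAttention(cond_dim, heads, batch_first=True)
        self.out = nn.Linear(cond_dim, latent_dim)

    def forward(self, noisy_latent, t, cond):
        # noisy_latent: (batch, tokens, latent_dim); cond: (batch, seq, cond_dim)
        h = self.latent_proj(noisy_latent) + self.time_embed(t)[:, None, :]
        h, _ = self.cross_attn(query=h, key=cond, value=cond)
        return self.out(h)  # predicted noise, same shape as noisy_latent

# One schematic training step of the epsilon-prediction objective.
encoder, denoiser = MultimodalPromptEncoder(), ConditionalDenoiser()
text_tokens  = torch.randn(2, 16, 768)   # dummy text embeddings
image_tokens = torch.randn(2, 4, 768)    # dummy embeddings of prompt entity images
latents      = torch.randn(2, 32, 64)    # dummy image latents
t            = torch.randint(0, 1000, (2,))
noise        = torch.randn_like(latents)
noisy        = latents + noise           # schematic; a real scheduler scales by alphas
pred         = denoiser(noisy, t, encoder(text_tokens, image_tokens))
loss         = nn.functional.mse_loss(pred, noise)
loss.backward()
```

Conditioning through cross-attention over the full interleaved sequence is what lets the same denoiser handle plain text prompts and prompts containing entity images uniformly, which is the property the abstract emphasizes.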

Abstract (translated)

Existing text-to-image diffusion models mainly generate images from text prompts. However, the inherent conciseness of textual descriptions makes it challenging to faithfully synthesize images with intricate details, such as specific entities or scenes. This paper presents UNIMO-G, a simple multimodal conditional diffusion framework that operates on multimodal prompts with interleaved textual and visual inputs, demonstrating a unified capability for both text-driven and subject-driven image generation. UNIMO-G consists of two core components: a Multimodal Large Language Model (MLLM) for encoding multimodal prompts, and a conditional denoising diffusion network for generating images from the encoded multimodal input. A two-stage training strategy is used to train the framework effectively: first, pre-training on large-scale text-image pairs to develop conditional image generation capabilities, and then instruction tuning with multimodal prompts to achieve unified image generation. To construct the multimodal prompts, a data processing pipeline involving language grounding and image segmentation is employed. UNIMO-G excels in both text-to-image generation and zero-shot subject-driven synthesis, and is particularly effective at generating high-fidelity images from complex multimodal prompts involving multiple image entities.
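The abstract also mentions a data processing pipeline that uses language grounding and image segmentation to turn ordinary image-caption pairs into multimodal prompts. The snippet below is one plausible way such a pipeline could be organized; the grounding and segmentation functions are hypothetical stubs (any open-vocabulary detector and segmenter could fill those roles), and the interleaving scheme is an assumption, not the authors' exact procedure.

```python
from dataclasses import dataclass
from typing import List, Union

def ground_phrases(caption: str, phrases: List[str]) -> dict:
    """Hypothetical: an open-vocabulary grounding model would map each phrase
    to a bounding box in the paired image. Dummy boxes are returned here."""
    return {p: (0, 0, 10, 10) for p in phrases}

def segment_entity(image, box):
    """Hypothetical: a segmentation model would return a background-free crop
    of the entity inside `box`. The box itself is returned as a placeholder."""
    return box

@dataclass
class ImageToken:
    entity_image: object  # segmented entity crop standing in for an image slot

def build_multimodal_prompt(caption: str, image, phrases: List[str]) -> List[Union[str, ImageToken]]:
    """Interleave caption text with segmented entity images (assumed scheme)."""
    boxes = ground_phrases(caption, phrases)
    prompt: List[Union[str, ImageToken]] = []
    cursor = 0
    for phrase in phrases:
        start = caption.find(phrase, cursor)
        if start == -1 or phrase not in boxes:
            continue
        prompt.append(caption[cursor:start])  # text preceding the entity mention
        prompt.append(ImageToken(segment_entity(image, boxes[phrase])))
        cursor = start + len(phrase)
    prompt.append(caption[cursor:])           # trailing text
    return prompt

prompt = build_multimodal_prompt(
    "a corgi wearing a red scarf sitting on a wooden bench",
    image=None,  # placeholder; a real pipeline would pass the paired image
    phrases=["a corgi", "a red scarf", "a wooden bench"],
)
# -> ['', ImageToken(...), ' wearing ', ImageToken(...), ' sitting on ', ImageToken(...), '']
```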

URL

https://arxiv.org/abs/2401.13388

PDF

https://arxiv.org/pdf/2401.13388.pdf

