Paper Reading AI Learner

3D-aware Image Generation and Editing with Multi-modal Conditions

2024-03-11 07:10:37
Bo Li, Yi-ke Li, Zhi-fen He, Bin Liu, Yun-Kun Lai

Abstract

3D-consistent image generation from a single 2D semantic label is an important and challenging research topic in computer graphics and computer vision. Although some related works have made great progress in this field, most existing methods suffer from poor disentanglement of shape and appearance and lack multi-modal control. In this paper, we propose a novel end-to-end 3D-aware image generation and editing model incorporating multiple types of conditional input, including pure noise, text, and reference images. On the one hand, we dive into the latent space of 3D Generative Adversarial Networks (GANs) and propose a novel disentanglement strategy to separate appearance features from shape features during the generation process. On the other hand, we propose a unified framework for flexible image generation and editing tasks with multi-modal conditions. Our method can generate diverse images from distinct noise inputs, edit attributes through text descriptions, and perform style transfer given a reference RGB image. Extensive experiments demonstrate that the proposed method outperforms alternative approaches both qualitatively and quantitatively on image generation and editing.
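The core idea described above, one latent code for shape and a separate one for appearance, where noise, text, or a reference image each map into the appearance space, can be sketched in miniature. Everything below is a toy illustration, not the paper's actual architecture: the encoders, the generator, and all dimensions are hypothetical stand-ins (a real 3D-aware GAN would render a radiance field instead of this scalar rescaling).

```python
import numpy as np

# Illustrative dimensions; the abstract does not specify the paper's actual sizes.
SHAPE_DIM, APP_DIM, RES = 64, 64, 16

def encode_condition(condition, kind):
    """Hypothetical condition encoders mapping noise, text, or a reference
    image into a shared appearance latent space (toy stand-ins only)."""
    if kind == "noise":
        return np.asarray(condition, dtype=float)[:APP_DIM]
    if kind == "text":
        # Hash-seeded embedding in place of a real text encoder.
        seed = abs(hash(condition)) % (2 ** 32)
        return np.random.default_rng(seed).standard_normal(APP_DIM)
    if kind == "image":
        # Random projection of a flattened reference image, in place of a real image encoder.
        flat = np.asarray(condition, dtype=float).ravel()
        proj = np.random.default_rng(1).standard_normal((APP_DIM, flat.size))
        return proj @ flat / flat.size
    raise ValueError(f"unknown condition kind: {kind}")

def generate(shape_code, appearance_code):
    """Toy generator: the shape code alone fixes the spatial structure;
    the appearance code only rescales it, so geometry and appearance
    are disentangled by construction."""
    w = np.random.default_rng(2).standard_normal((RES * RES, SHAPE_DIM))
    structure = np.tanh(w @ shape_code).reshape(RES, RES)
    return structure * appearance_code.mean()

rng = np.random.default_rng(0)
shape_z = rng.standard_normal(SHAPE_DIM)

# One shape code, three different appearance conditions (noise / text / image).
img_a = generate(shape_z, encode_condition(rng.standard_normal(APP_DIM), "noise"))
img_b = generate(shape_z, encode_condition("a smiling face", "text"))
img_c = generate(shape_z, encode_condition(rng.random((8, 8, 3)), "image"))
```

Because the appearance code enters only as a scale factor here, the three outputs share the exact same spatial structure while differing in appearance, which is the disentanglement property the abstract claims, reduced to its simplest possible form.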

Abstract (translated)

3D-consistent image generation is an important research topic in computer graphics and computer vision. Although some related progress has been made in this field, existing methods often perform poorly in disentangling shape and appearance, and lack multi-modal control. In this paper, we propose a novel end-to-end 3D-aware image generation and editing model with multiple types of conditional input, including pure noise, text, and reference images. On the one hand, we delve into the latent space of 3D Generative Adversarial Networks (GANs) and propose a new disentanglement strategy that separates appearance features from shape features during the generation process. On the other hand, we propose a unified framework for flexible image generation and editing tasks under multi-modal conditions. Our method can generate diverse images from distinct noise inputs, edit attributes through text descriptions, and perform style transfer given a reference RGB image. Extensive experiments demonstrate that the proposed method outperforms alternative approaches in both image generation and editing.

URL

https://arxiv.org/abs/2403.06470

PDF

https://arxiv.org/pdf/2403.06470.pdf
