Abstract
3D-consistent image generation from a single 2D semantic label is an important and challenging research topic in computer graphics and computer vision. Although related works have made great progress in this field, most existing methods suffer from poor disentanglement of shape and appearance and lack multi-modal control. In this paper, we propose a novel end-to-end 3D-aware image generation and editing model that accepts multiple types of conditional inputs, including pure noise, text, and a reference image. On the one hand, we dive into the latent space of 3D Generative Adversarial Networks (GANs) and propose a novel disentanglement strategy that separates appearance features from shape features during generation. On the other hand, we propose a unified framework for flexible image generation and editing under multi-modal conditions. Our method can generate diverse images from distinct noise inputs, edit attributes through a text description, and perform style transfer given a reference RGB image. Extensive experiments demonstrate that the proposed method outperforms alternative approaches both qualitatively and quantitatively on image generation and editing.
URL
https://arxiv.org/abs/2403.06470