Abstract
3D GAN inversion aims to achieve high reconstruction fidelity and reasonable 3D geometry simultaneously from a single input image. However, existing 3D GAN inversion methods rely on time-consuming optimization for each individual case. In this work, we introduce a novel encoder-based inversion framework built on EG3D, one of the most widely used 3D GAN models. We leverage the inherent properties of EG3D's latent space to design a discriminator and a background depth regularization. This enables us to train a geometry-aware encoder capable of converting the input image into a corresponding latent code. Additionally, we explore the feature space of EG3D and develop an adaptive refinement stage that improves the representational capacity of EG3D's features, enhancing the recovery of fine-grained texture details. Finally, we propose an occlusion-aware fusion operation to prevent distortion in unobserved regions. Our method achieves impressive results comparable to optimization-based methods while operating up to 500 times faster. Our framework is well-suited for applications such as semantic editing.
URL
https://arxiv.org/abs/2303.12326