Paper Reading AI Learner

Make Encoder Great Again in 3D GAN Inversion through Geometry and Occlusion-Aware Encoding

2023-03-22 05:51:53
Ziyang Yuan, Yiming Zhu, Yu Li, Hongyu Liu, Chun Yuan

Abstract

3D GAN inversion aims to achieve high reconstruction fidelity and reasonable 3D geometry simultaneously from a single image input. However, existing 3D GAN inversion methods rely on time-consuming optimization for each individual case. In this work, we introduce a novel encoder-based inversion framework based on EG3D, one of the most widely used 3D GAN models. We leverage the inherent properties of EG3D's latent space to design a discriminator and a background depth regularization. This enables us to train a geometry-aware encoder capable of converting the input image into the corresponding latent code. Additionally, we explore the feature space of EG3D and develop an adaptive refinement stage that improves the representation ability of features in EG3D to enhance the recovery of fine-grained textural details. Finally, we propose an occlusion-aware fusion operation to prevent distortion in unobserved regions. Our method achieves impressive results comparable to optimization-based methods while operating up to 500 times faster. Our framework is well-suited for applications such as semantic editing.
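
The abstract outlines three components: a geometry-aware encoder trained with a latent-space discriminator and a background depth regularization, an adaptive refinement stage in EG3D's feature space, and an occlusion-aware fusion operation. The snippet below is a minimal, illustrative PyTorch sketch of what the fusion and the background depth term could look like; the function names, tensor shapes, masks, and loss formulation are assumptions made here for illustration, not the authors' actual implementation.

# Illustrative sketch only. `visible_mask`, `bg_mask`, and the specific loss
# form are hypothetical stand-ins, not taken from the paper's code.
import torch
import torch.nn.functional as F

def occlusion_aware_fusion(refined_feat, generated_feat, visible_mask):
    """Blend detail-carrying features only where the input view observed the
    surface; fall back to the generator's own features elsewhere.

    refined_feat:   (B, C, H, W) features carrying input-image details
    generated_feat: (B, C, H, W) features synthesized by the 3D GAN
    visible_mask:   (B, 1, H, W) in [0, 1], 1 = visible in the input view
    """
    # Resize the mask to the feature resolution if needed.
    if visible_mask.shape[-2:] != refined_feat.shape[-2:]:
        visible_mask = F.interpolate(visible_mask, size=refined_feat.shape[-2:],
                                     mode="bilinear", align_corners=False)
    return visible_mask * refined_feat + (1.0 - visible_mask) * generated_feat

def background_depth_regularizer(depth, bg_mask, target_depth=1.0):
    """Push background pixels toward a flat, distant depth plane so the
    encoder does not hallucinate geometry behind the subject.

    depth:   (B, 1, H, W) rendered depth map
    bg_mask: (B, 1, H, W) 1 = background pixel
    """
    return F.mse_loss(depth * bg_mask, torch.full_like(depth, target_depth) * bg_mask)

if __name__ == "__main__":
    B, C, H, W = 1, 32, 64, 64
    refined = torch.randn(B, C, H, W)
    generated = torch.randn(B, C, H, W)
    mask = (torch.rand(B, 1, H, W) > 0.5).float()
    print(occlusion_aware_fusion(refined, generated, mask).shape)  # (1, 32, 64, 64)

The gating intuition is that input-image details should only overwrite the generator's prediction where the input view actually saw the surface; unobserved regions keep the generative prior so that novel views remain undistorted.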

Abstract (translated)

3D GAN inversion aims to simultaneously achieve high reconstruction fidelity and reasonable 3D geometry from a single input image. However, existing 3D GAN inversion methods rely on time-consuming per-instance optimization. In this work, we introduce a novel encoder-based inversion framework built on EG3D, one of the most widely used 3D GAN models. We exploit the inherent properties of EG3D's latent space to design a discriminator and a background depth regularization, which enables us to train a geometry-aware encoder that maps the input image to the corresponding latent code. In addition, we explore EG3D's feature space and develop an adaptive refinement stage that improves the representational ability of EG3D's features to better recover fine-grained texture details. Finally, we propose an occlusion-aware fusion operation to prevent distortion in unobserved regions. Our method achieves impressive results comparable to optimization-based methods while running up to 500 times faster. Our framework is well suited to applications such as semantic editing.

URL

https://arxiv.org/abs/2303.12326

PDF

https://arxiv.org/pdf/2303.12326.pdf

