Paper Reading AI Learner

High-Fidelity Face Swapping with Style Blending

2023-12-17 23:22:37
Xinyu Yang, Hongbo Bo

Abstract

Face swapping has gained significant traction, driven by advances in deep-learning-based human face synthesis. However, previous face-swapping methods built on generative adversarial networks (GANs) have suffered from blending inconsistencies, distortions, artifacts, and unstable training. To address these limitations, we propose an innovative end-to-end framework for high-fidelity face swapping. First, we introduce a StyleGAN-based facial attribute encoder that extracts essential features from faces and inverts them into a latent style code encapsulating the facial attributes required for successful face swapping. Second, we introduce an attention-based style blending module that effectively transfers the face ID from source to target. To ensure accurate, high-quality transfer, we impose a set of constraints: contrastive face-ID learning, facial landmark alignment, and dual swap consistency. Finally, the blended style code is translated back to image space by a style decoder, which offers high training stability and strong generative capability. Extensive experiments on the CelebA-HQ dataset demonstrate the superior visual quality of images generated by our method compared with other state-of-the-art approaches, as well as the effectiveness of each proposed module. Source code and weights will be publicly available.
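The abstract's attention-based style blending step can be pictured as cross-attention over StyleGAN-style latent codes: queries come from the target's code, keys and values from the source's, and the attended result injects source identity into the target. The sketch below is purely illustrative, assuming a W+-style code (one 512-d vector per synthesis layer) and randomly initialized projections; the function names, dimensions, and residual-add formulation are assumptions, not the authors' released implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def blend_style_codes(w_src, w_tgt, Wq, Wk, Wv):
    """Cross-attention blend: queries from the target style code,
    keys/values from the source style code (hypothetical sketch)."""
    q = w_tgt @ Wq                          # (n_layers, d)
    k = w_src @ Wk                          # (n_layers, d)
    v = w_src @ Wv                          # (n_layers, d)
    d = q.shape[-1]
    attn = softmax(q @ k.T / np.sqrt(d))    # (n_layers, n_layers)
    identity = attn @ v                     # ID features gathered from source
    return w_tgt + identity                 # inject source ID into target code

# W+-style code: one 512-d vector per synthesis layer
# (18 layers for a 1024x1024 StyleGAN2 generator)
rng = np.random.default_rng(0)
n_layers, d = 18, 512
w_src = rng.standard_normal((n_layers, d))
w_tgt = rng.standard_normal((n_layers, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
w_blend = blend_style_codes(w_src, w_tgt, Wq, Wk, Wv)
print(w_blend.shape)  # (18, 512)
```

In a trained system the projections would be learned and the blended code fed to the style decoder; the constraints named in the abstract (contrastive face-ID learning, landmark alignment, dual swap consistency) would be losses applied to the decoded image, not shown here.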


URL

https://arxiv.org/abs/2312.10843

PDF

https://arxiv.org/pdf/2312.10843.pdf

