Paper Reading AI Learner

Talking Face Generation by Adversarially Disentangled Audio-Visual Representation

2018-07-20 14:26:32
Hang Zhou, Yu Liu, Ziwei Liu, Ping Luo, Xiaogang Wang

Abstract

Talking face generation aims to synthesize a sequence of face images that correspond to given speech semantics. When people talk, however, the subtle movements of the face region are a complex combination of the subject's intrinsic facial appearance and the extrinsic speech being delivered. Existing works focus on either the former, building a subject-specific facial appearance model, or the latter, modeling the identity-agnostic mapping between lip motion and speech. In this work, we integrate both aspects and enable arbitrary-subject talking face generation by learning a disentangled audio-visual representation. We assume that a talking face sequence is a composition of subject-related information and speech-related information, and these two spaces are explicitly disentangled through a novel associative-and-adversarial training process. The disentangled representation has the additional advantage that either audio or video can serve as the source of speech information for generation. Extensive experiments show that our proposed approach generates realistic talking face sequences for arbitrary subjects with much clearer lip motion patterns. We also demonstrate that the learned audio-visual representation is extremely useful for applications such as automatic lip reading and audio-video retrieval.
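The core idea, factoring each sample into a subject-related part and a speech-related part while an adversary prevents subject identity from leaking into the speech code, can be illustrated with a deliberately tiny linear sketch. Everything below (the 2-D toy data, the 1-D code, the hand-derived gradients) is a made-up illustration of the gradient-reversal style of adversarial disentanglement, not the paper's actual architecture:

```python
import numpy as np

# Toy data: each sample x = [identity_bit, speech_value]. We learn a 1-D
# "speech code" z = W @ x that should retain speech_value but hide identity_bit.
rng = np.random.default_rng(0)
n = 512
identity = rng.integers(0, 2, n).astype(float)   # subject-related factor
speech = rng.normal(size=n)                      # speech-related factor
X = np.stack([identity, speech], axis=1)

W = rng.normal(scale=0.1, size=2)   # encoder weights
a, b = 0.0, 0.0                     # adversary: logistic regression id ~ z
c = 0.1                             # speech head: regresses speech from z
lr, lam = 0.05, 0.5                 # lam scales the reversed adversary gradient

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

for _ in range(2000):
    z = X @ W
    p = sigmoid(a * z + b)          # adversary's identity prediction
    err = c * z - speech            # speech head's residual
    # Encoder: descend the speech loss but ASCEND the adversary's loss
    # (gradient reversal), so z becomes uninformative about identity.
    dz = err * c - lam * (p - identity) * a
    gW = (X * dz[:, None]).mean(axis=0)
    # Adversary and speech head each descend their own loss on the current z.
    ga = np.mean((p - identity) * z)
    gb = np.mean(p - identity)
    gc = np.mean(err * z)
    W -= lr * gW; a -= lr * ga; b -= lr * gb; c -= lr * gc

z = X @ W
mse = np.mean((c * z - speech) ** 2)
print(f"encoder weights: identity={W[0]:.3f}, speech={W[1]:.3f}, speech MSE={mse:.3f}")
```

After training, the encoder's weight on the identity dimension collapses toward zero while the speech dimension is preserved. In the paper the same pressure is applied with deep encoders and discriminator networks, and the shared speech space is what lets either the audio stream or the video stream supply the speech code at generation time.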

URL

https://arxiv.org/abs/1807.07860

PDF

https://arxiv.org/pdf/1807.07860.pdf

