Abstract
Talking face generation aims to synthesize a sequence of face images that corresponds to given speech semantics. However, when people talk, the subtle movements of the face region are usually a complex combination of the subject's intrinsic face appearance and the extrinsic speech being delivered. Existing works focus on either the former, constructing a subject-specific face appearance model, or the latter, modeling the identity-agnostic transformation between lip motion and speech. In this work, we integrate both aspects and enable arbitrary-subject talking face generation by learning disentangled audio-visual representations. We assume the talking face sequence is a composition of subject-related information and speech-related information. These two spaces are then explicitly disentangled through a novel associative-and-adversarial training process. The disentangled representation has the additional advantage that either audio or video can serve as the source of speech information for generation. Extensive experiments show that our proposed approach can generate realistic talking face sequences for arbitrary subjects with much clearer lip motion patterns. We also demonstrate that the learned audio-visual representation is highly useful for applications such as automatic lip reading and audio-video retrieval.
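The composition the abstract describes can be sketched in a few lines: a frame is assumed to decompose into a subject-related (identity) code and a speech-related code living in a shared speech space, so that either an audio clip or a video clip can supply the speech code at generation time. The dimensions, the linear encoders/decoder, and the function names below are all illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

D_FRAME, D_AUDIO = 512, 128    # assumed input feature sizes
D_ID, D_SPEECH = 64, 64        # assumed latent code sizes

# Hypothetical linear stand-ins for the learned networks.
W_id = rng.standard_normal((D_ID, D_FRAME)) * 0.01       # identity encoder (video)
W_sp_v = rng.standard_normal((D_SPEECH, D_FRAME)) * 0.01  # speech encoder (video)
W_sp_a = rng.standard_normal((D_SPEECH, D_AUDIO)) * 0.01  # speech encoder (audio)
W_dec = rng.standard_normal((D_FRAME, D_ID + D_SPEECH)) * 0.01  # decoder

def generate(ref_frame, speech_source, from_audio=False):
    """Compose an output frame from an identity code plus a speech code.

    Because the speech space is shared across modalities, the speech code
    may come from audio or from video -- the property the abstract notes.
    """
    p = W_id @ ref_frame                                     # subject-related code
    s = (W_sp_a if from_audio else W_sp_v) @ speech_source   # speech-related code
    return W_dec @ np.concatenate([p, s])                    # synthesized frame features

frame = rng.standard_normal(D_FRAME)
audio = rng.standard_normal(D_AUDIO)

out_from_audio = generate(frame, audio, from_audio=True)
out_from_video = generate(frame, frame, from_audio=False)
assert out_from_audio.shape == (D_FRAME,)
assert out_from_video.shape == (D_FRAME,)
```

In the actual method, the disentanglement of the two code spaces is enforced by associative-and-adversarial training rather than being given by construction as in this sketch.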
URL
https://arxiv.org/abs/1807.07860