Abstract
In this paper we address the challenging problem of producing a full image sequence of a deformable face given only a single image and generic facial motions encoded by a set of sparse landmarks. To this end, we build upon recent breakthroughs in image-to-image translation such as pix2pix, CycleGAN and StarGAN, which train Deep Convolutional Neural Networks (DCNNs) to map aligned pairs of images between different domains (i.e., having different labels), and propose a new architecture that is no longer driven by labels but by spatial maps, namely facial landmarks. In particular, we propose MotionGAN, which transforms an input face image into a new one according to a heatmap of target landmarks. We show that it is possible to create highly realistic face videos using a single image and a set of target landmarks. Furthermore, our method can be used to edit a facial image with arbitrary motions specified by landmarks (e.g., expression, speech, etc.). This provides much more flexibility for face editing, expression transfer, facial video creation, etc. than models conditioned on discrete expressions, audio, or action units.
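The conditioning step described above, rendering the sparse target landmarks as a heatmap that is fed to the generator together with the input image, can be sketched as follows. This is a minimal illustration under assumed details: the function names, the Gaussian rendering of each landmark, and the channel-wise concatenation are common conventions for landmark-conditioned generators, not the paper's exact implementation.

```python
import numpy as np

def landmarks_to_heatmap(landmarks, height, width, sigma=2.0):
    """Render sparse (x, y) landmarks as a single-channel heatmap by
    placing a Gaussian blob at each landmark location (assumed encoding)."""
    ys, xs = np.mgrid[0:height, 0:width]
    heatmap = np.zeros((height, width), dtype=np.float32)
    for (x, y) in landmarks:
        blob = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2))
        heatmap = np.maximum(heatmap, blob)  # keep the strongest response per pixel
    return heatmap

def conditioning_input(image, landmarks, sigma=2.0):
    """Stack the RGB image with the landmark heatmap channel-wise,
    forming an (H, W, 4) conditioning tensor for the generator."""
    h, w, _ = image.shape
    heatmap = landmarks_to_heatmap(landmarks, h, w, sigma)
    return np.concatenate([image, heatmap[..., None]], axis=-1)

# Example: a 64x64 image conditioned on three target landmarks
img = np.zeros((64, 64, 3), dtype=np.float32)
cond = conditioning_input(img, [(20, 30), (44, 30), (32, 48)])
print(cond.shape)  # (64, 64, 4)
```

A generator built in the pix2pix style would then take this four-channel tensor as input and emit the deformed face, so the same network can realize any motion expressible as a landmark configuration.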
URL
https://arxiv.org/abs/1904.11521