Abstract
Previous methods have handled only discrete manipulation of facial attributes (e.g., smiling, sad, angry, surprised) drawn from a set of canonical expressions; they are not scalable and operate in a single modality. In this paper, we propose a novel framework that supports continuous edits and multi-modality portrait manipulation using adversarial learning. Specifically, we adapt cycle consistency to the conditional setting by leveraging additional facial-landmark information. This has two effects: first, the cycle mapping induces bidirectional manipulation and preserves identity; second, paired samples from different modalities can thus be utilized. To ensure high-quality synthesis, we adopt a texture loss that enforces texture consistency and multi-level adversarial supervision that facilitates gradient flow. Quantitative and qualitative experiments show the effectiveness of our framework in performing flexible, multi-modality portrait manipulation with photo-realistic results.
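A minimal sketch of the landmark-conditioned cycle-consistency idea described above, using NumPy stand-ins. The generator `G`, the Gram-matrix texture term, and all array shapes are illustrative assumptions, not the paper's actual architecture or code.

```python
import numpy as np

def gram(features):
    """Gram matrix over channels: a standard texture statistic
    (illustrative stand-in for the paper's texture loss)."""
    c, n = features.shape
    return features @ features.T / n

def cycle_loss(x, x_reconstructed):
    """L1 cycle-consistency: editing forward then back should recover x."""
    return np.abs(x - x_reconstructed).mean()

def texture_loss(feat_real, feat_fake):
    """Penalize Gram-matrix differences to keep texture consistent."""
    return np.abs(gram(feat_real) - gram(feat_fake)).mean()

def G(x, landmarks):
    """Toy stand-in generator: blends the input toward a
    landmark-derived map (hypothetical, for shape bookkeeping only)."""
    return 0.9 * x + 0.1 * landmarks

rng = np.random.default_rng(0)
x = rng.random((3, 64))        # source "image" (toy, flattened)
l_src = rng.random((3, 64))    # source landmark heatmap
l_tgt = rng.random((3, 64))    # target landmark heatmap

y = G(x, l_tgt)                # forward edit toward target expression
x_rec = G(y, l_src)            # cycle back using source landmarks

total = cycle_loss(x, x_rec) + texture_loss(x, y)
```

In a real training loop, both losses would be added to the multi-level adversarial objectives; the cycle term is what makes the edit bidirectional and identity-preserving.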
URL
https://arxiv.org/abs/1807.01826