Abstract
Generative adversarial networks (GANs) have recently led to highly realistic image synthesis results. In this work, we describe a new method to expose GAN-synthesized images using the locations of facial landmark points. Our method is based on the observation that the configurations of facial parts generated by GAN models differ from those of real faces, due to the lack of global constraints. We perform experiments demonstrating this phenomenon, and show that an SVM classifier trained on the locations of facial landmark points is sufficient to achieve good classification performance for GAN-synthesized faces.
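A minimal sketch of the classification step described above, using scikit-learn's SVM on flattened landmark-coordinate features. The landmark extraction itself (e.g. with a 68-point detector such as dlib's) is out of scope, so synthetic feature vectors stand in for real/GAN landmark data; the class offset is an illustrative assumption, not a property of the paper's dataset.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in features: each face is represented by the (x, y) coordinates
# of 68 facial landmarks, flattened into a 136-dim vector. Here we draw
# synthetic vectors with a small mean shift between classes to mimic the
# configuration differences the paper reports (hypothetical data).
n_per_class = 200
real_faces = rng.normal(loc=0.0, scale=1.0, size=(n_per_class, 136))
gan_faces = rng.normal(loc=0.5, scale=1.0, size=(n_per_class, 136))

X = np.vstack([real_faces, gan_faces])
y = np.array([0] * n_per_class + [1] * n_per_class)  # 0 = real, 1 = GAN

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# An RBF-kernel SVM trained directly on landmark locations, as in the paper.
clf = SVC(kernel="rbf").fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

In practice the feature vectors would come from running a landmark detector on real and GAN-synthesized face images, optionally normalizing coordinates for face position and scale before training.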
URL
https://arxiv.org/abs/1904.00167