Abstract
We present a novel method for constructing a Variational Autoencoder (VAE). Instead of using a pixel-by-pixel loss, we enforce deep feature consistency between the input and the output of the VAE, which ensures that the VAE's output preserves the spatial correlation characteristics of the input, giving the output a more natural visual appearance and better perceptual quality. Building on recent deep learning work such as style transfer, we employ a pre-trained deep convolutional neural network (CNN) and use its hidden features to define a feature perceptual loss for VAE training. Evaluated on the CelebA face dataset, we show that our model produces better results than other methods in the literature. We also show that our method yields latent vectors that capture the semantic information of facial expressions and can be used to achieve state-of-the-art performance in facial attribute prediction.
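As a rough illustration of the feature perceptual loss described above, the following is a minimal sketch (not the authors' implementation): it assumes the hidden activations of the pre-trained CNN have already been extracted for the input image and the VAE reconstruction, and sums per-layer mean squared errors between them. The function name and the layer-list interface are hypothetical.

```python
import numpy as np

def feature_perceptual_loss(feats_input, feats_recon):
    """Sum of per-layer mean squared errors between the hidden
    features of the input image and those of the VAE reconstruction.
    Each argument is a list of feature maps (one array per CNN layer).
    Layer choice and weighting are left out of this sketch."""
    return float(sum(
        np.mean((f_in - f_rec) ** 2)
        for f_in, f_rec in zip(feats_input, feats_recon)
    ))
```

In training, this term would replace the usual pixel-wise reconstruction loss and be combined with the standard KL-divergence term of the VAE objective.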
URL
https://arxiv.org/abs/1610.00291