Abstract
In this work, we present an attempt at mixture generation: absorbing different image concepts (e.g., content and style) from different domains and thereby generating a new domain that combines the learned concepts. In particular, we propose the mixture generative adversarial network (MIXGAN). MIXGAN learns the concepts of content and style from two different domains, and can therefore join them for mixture generation in a new domain, i.e., generating images with the content of one domain and the style of the other. MIXGAN overcomes a limitation of current GAN-based models, which either generate new images only in the same domain observed during training, or require off-the-shelf content templates for transfer or translation. Extensive experimental results demonstrate the effectiveness of MIXGAN compared with related state-of-the-art GAN-based models.
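The core idea, recombining content learned from one domain with style learned from another, can be illustrated with a minimal sketch. This is a hypothetical toy (an AdaIN-style renormalization of toy decoder features), not the actual MIXGAN architecture; all function names and shapes here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def content_features(z, W_content):
    # Toy "content decoder": maps a shared latent code to features
    # (stands in for a generator branch trained on domain A).
    return np.tanh(z @ W_content)

def apply_style(features, style_mean, style_std, eps=1e-5):
    # Renormalize content features to carry target style statistics
    # (stands in for style learned from domain B).
    mu, sigma = features.mean(), features.std()
    return style_std * (features - mu) / (sigma + eps) + style_mean

z = rng.normal(size=(1, 16))           # shared latent code
W = rng.normal(size=(16, 64))          # toy content-decoder weights
feats = content_features(z, W)         # "content" from domain A
mixed = apply_style(feats, style_mean=0.5, style_std=0.1)  # "style" from domain B

print(mixed.shape)  # features carrying A's content structure, B's style statistics
```

The mixed output preserves the spatial structure of the content features while its first- and second-order statistics match the target style, which is the kind of cross-domain recombination the abstract describes.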
URL
https://arxiv.org/abs/1807.01659