Abstract
Generating images from textual descriptions is a challenging task. Generative adversarial networks (GANs) have been shown to generate realistic images of real-life objects. In this paper, we propose a new neural network architecture, an LSTM Conditional Generative Adversarial Network, to generate images of real-life objects. Our proposed model is trained on the Oxford-102 Flowers and Caltech-UCSD Birds-200-2011 datasets. We demonstrate that our proposed model produces better results, surpassing other state-of-the-art approaches.
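The conditioning idea behind such an architecture can be sketched in a few lines: an LSTM encodes the caption into a fixed-size vector, which is concatenated with a noise vector and fed to the generator. The sketch below is a toy NumPy illustration with untrained random weights and assumed dimensions (embedding size 8, hidden size 16, an 8×8 output image); it is not the paper's actual model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_encode(tokens, Wx, Wh, b, hidden):
    """Run a single-layer LSTM over token embeddings; return the final hidden state."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in tokens:
        z = Wx @ x + Wh @ h + b                                 # stacked gates [i, f, o, g]
        i, f, o = (sigmoid(z[k * hidden:(k + 1) * hidden]) for k in range(3))
        g = np.tanh(z[3 * hidden:])
        c = f * c + i * g                                       # cell-state update
        h = o * np.tanh(c)                                      # hidden-state update
    return h

rng = np.random.default_rng(0)
embed_dim, hidden, z_dim, img_pixels = 8, 16, 10, 64            # toy sizes (assumptions)

# Toy "caption": a sequence of 5 token embeddings.
caption = rng.normal(size=(5, embed_dim))

# Random LSTM weights (untrained; for shape illustration only).
Wx = rng.normal(size=(4 * hidden, embed_dim))
Wh = rng.normal(size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)

text_code = lstm_encode(caption, Wx, Wh, b, hidden)

# Conditional generator input: concatenate noise z with the text encoding.
z = rng.normal(size=z_dim)
gen_in = np.concatenate([z, text_code])

# One linear "generator" layer mapping to a flat 8x8 image, squashed to [-1, 1].
W_gen = rng.normal(size=(img_pixels, z_dim + hidden))
fake_image = np.tanh(W_gen @ gen_in)

print(fake_image.shape)  # → (64,)
```

In a real conditional GAN the generator and a discriminator (which also sees the text encoding) would be trained adversarially; here only the data flow from caption to conditioned image output is shown.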
URL
https://arxiv.org/abs/1806.03027