Paper Reading AI Learner

Image Manipulation with Natural Language using Two-sided Attentive Conditional Generative Adversarial Network

2019-12-16 16:21:13
Dawei Zhu, Aditya Mogadala, Dietrich Klakow

Abstract

Altering the content of an image with photo-editing tools is a tedious task for an inexperienced user, especially when modifying the visual attributes of a specific object without affecting other constituents such as the background. To simplify image manipulation and give users more control, it is preferable to use a simpler interface such as natural language. In this paper, we therefore address the challenge of manipulating images using natural language descriptions. We propose the Two-sidEd Attentive conditional Generative Adversarial Network (TEA-cGAN), which generates semantically manipulated images while keeping other content, such as the background, intact. TEA-cGAN applies fine-grained attention at different scales in both the generator and the discriminator of a Generative Adversarial Network (GAN) based framework. Experimental results show that TEA-cGAN, which generates 128x128 and 256x256 resolution images, outperforms existing methods on the CUB and Oxford-102 datasets both quantitatively and qualitatively.
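The abstract describes fine-grained attention between the text and image inside the generator and discriminator. The paper's exact formulation is not given here, so the following is only a minimal sketch of one common form of fine-grained (word-level) attention, where each image-region feature attends over the word embeddings of the description; all shapes and names are illustrative assumptions.

```python
import numpy as np

def fine_grained_attention(regions, words):
    """Sketch of word-level attention over image regions (assumed form,
    not the paper's exact mechanism).

    regions: (N, d) array of image-region features
    words:   (T, d) array of word embeddings from the description
    Returns a (N, d) word-context vector for each region.
    """
    scores = regions @ words.T                       # (N, T) similarities
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)          # softmax over words
    return attn @ words                              # (N, d) contexts

rng = np.random.default_rng(0)
regions = rng.standard_normal((16, 32))   # e.g. a 4x4 feature map, flattened
words = rng.standard_normal((7, 32))      # a 7-word description
context = fine_grained_attention(regions, words)
print(context.shape)                      # (16, 32)
```

Applying such attention at several feature-map scales, in both the generator and the discriminator, is one plausible reading of the "two-sided, different scales" design the abstract describes.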

URL

https://arxiv.org/abs/1912.07478

PDF

https://arxiv.org/pdf/1912.07478