GANILLA: Generative Adversarial Networks for Image to Illustration Translation

2020-02-13 17:12:09
Samet Hicsonmez, Nermin Samet, Emre Akbas, Pinar Duygulu

Abstract

In this paper, we explore illustrations in children's books as a new domain in unpaired image-to-image translation. We show that although the current state-of-the-art image-to-image translation models successfully transfer either the style or the content, they fail to transfer both at the same time. We propose a new generator network to address this issue and show that the resulting network strikes a better balance between style and content. There are no well-defined or agreed-upon evaluation metrics for unpaired image-to-image translation. So far, the success of image translation models has been based on subjective, qualitative visual comparison on a limited number of images. To address this problem, we propose a new framework for the quantitative evaluation of image-to-illustration models, where both content and style are taken into account using separate classifiers. In this new evaluation framework, our proposed model performs better than the current state-of-the-art models on the illustrations dataset. Our code and pretrained models can be found at this https URL.
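The abstract's evaluation framework scores translated images with two separate classifiers, one for source content and one for illustrator style. The following is a minimal sketch of that idea only; all names (`evaluate_translation`, the stand-in classifiers, the dummy image records) are illustrative assumptions, not from the GANILLA code or paper.

```python
# Hedged sketch: evaluate a batch of translated images with two
# separate binary classifiers, one checking that the source content
# is preserved and one checking that the target illustration style
# is present. Real classifiers would be trained CNNs; here they are
# simple callables over dummy image records (an assumption).

def evaluate_translation(images, content_clf, style_clf):
    """Return (content_score, style_score) as fractions in [0, 1]."""
    n = len(images)
    content_score = sum(1 for img in images if content_clf(img)) / n
    style_score = sum(1 for img in images if style_clf(img)) / n
    return content_score, style_score

# Toy stand-ins: each "image" just records whether content/style survived.
images = [
    {"has_content": True, "has_style": True},
    {"has_content": True, "has_style": False},
    {"has_content": False, "has_style": True},
    {"has_content": True, "has_style": True},
]

content_score, style_score = evaluate_translation(
    images,
    content_clf=lambda img: img["has_content"],
    style_clf=lambda img: img["has_style"],
)
# content_score = 0.75, style_score = 0.75
```

A model that transfers only style would score high on `style_score` but low on `content_score`, and vice versa; the paper's point is that a good translator must do well on both.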

URL

https://arxiv.org/abs/2002.05638

PDF

https://arxiv.org/pdf/2002.05638