Abstract
Recently, image-to-image translation (IIT) has made great progress in image style transfer and in manipulating the semantic content of an image. However, existing approaches require exhaustive labelling of training data, which is labor-intensive, difficult to scale up, and hard to adapt to new domains. To overcome this key limitation, we propose Sparsely Grouped Generative Adversarial Networks (SG-GAN), a novel approach that performs image translation on sparsely grouped datasets, in which most training data are mixed (unlabelled) and only a few are labelled. With its one-input, multiple-output architecture, SG-GAN can translate among multiple groups using only a single trained model. As a case study to experimentally validate the advantages of our model, we apply the algorithm to a series of attribute-manipulation tasks on facial images. Experimental results show that SG-GAN achieves results competitive with previous state-of-the-art methods on adequately labelled datasets, while attaining superior image-translation quality on sparsely grouped datasets where most data are mixed and only a small portion is labelled.
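The one-input, multiple-output idea (a single trained model translating toward any target group) can be illustrated by conditioning one generator on a target-group code. The sketch below is a toy illustration under that assumption, not the paper's actual network: the linear map, dimensions, and `translate` function are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_GROUPS = 3  # e.g. three facial-attribute groups (illustrative)
IMG_DIM = 8     # flattened toy "image"

# One weight matrix shared across all groups; the target group enters
# the single model as a one-hot code appended to the input.
W = rng.standard_normal((IMG_DIM, IMG_DIM + NUM_GROUPS)) * 0.1

def translate(image: np.ndarray, target_group: int) -> np.ndarray:
    """Translate `image` toward `target_group` with the single shared model."""
    onehot = np.eye(NUM_GROUPS)[target_group]
    x = np.concatenate([image, onehot])  # condition the input on the group
    return np.tanh(W @ x)                # toy generator forward pass

img = rng.standard_normal(IMG_DIM)
# The same model yields one output per target group:
outputs = [translate(img, g) for g in range(NUM_GROUPS)]
print(len(outputs), outputs[0].shape)
```

The point of the sketch is only the interface: every group-to-group translation goes through the same parameters `W`, so adding a group does not require training a new pairwise model.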
URL
https://arxiv.org/abs/1805.07509