Abstract
Recent years have witnessed exciting developments in generating images from scene-based text descriptions. These approaches have primarily focused on generating an image from a static text description in a single pass; they cannot generate an image interactively from an incrementally growing text description, which is closer to the way we naturally describe a scene. We propose a method to generate an image incrementally from a sequence of scene-description graphs (scene graphs). Our recurrent network architecture preserves the image content generated in previous steps and modifies the cumulative image according to newly provided scene information. The model uses Graph Convolutional Networks (GCNs) to handle variable-sized scene graphs, combined with generative adversarial image-translation networks, to produce realistic multi-object images without requiring any intermediate supervision during training. We experiment with the COCO-Stuff dataset, which contains multi-object images annotated with descriptions of the visual scene, and show that our model significantly outperforms other approaches on this dataset in generating visually consistent images for incrementally growing scene graphs.
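To make the described pipeline concrete, the following is a minimal sketch of one incremental generation step. It is an illustration under simplifying assumptions, not the authors' implementation: the GCN here is a plain adjacency-based layer rather than the paper's scene-graph convolution over (subject, predicate, object) edges, and a single convolution stands in for the GAN-based image-translation generator. All class, function, and variable names are hypothetical.

```python
import torch
import torch.nn as nn


class GraphConvLayer(nn.Module):
    """Simplified graph convolution: aggregate neighbor features via an
    adjacency matrix, then apply a learned linear map. (Placeholder for
    the paper's edge-conditioned scene-graph convolution.)"""

    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, node_feats, adj):
        # node_feats: (N, dim) object embeddings; adj: (N, N) adjacency.
        messages = adj @ node_feats          # sum features over neighbors
        return torch.relu(self.linear(messages))


class RecurrentSceneGraphToImage(nn.Module):
    """One incremental step: encode the current scene graph, then fuse
    that encoding with the image generated at the previous step, so
    earlier content is preserved while new scene information is added."""

    def __init__(self, dim, img_channels=3):
        super().__init__()
        self.gcn = GraphConvLayer(dim)
        # Placeholder generator head; the paper uses an adversarially
        # trained image-translation network here instead.
        self.generator = nn.Conv2d(img_channels + dim, img_channels,
                                   kernel_size=3, padding=1)

    def forward(self, node_feats, adj, prev_image):
        # Pool node embeddings into a single scene code: (dim,).
        graph_code = self.gcn(node_feats, adj).mean(dim=0)
        b, _, h, w = prev_image.shape
        # Broadcast the scene code over the spatial grid and condition
        # the previously generated image on it.
        cond = graph_code.view(1, -1, 1, 1).expand(b, -1, h, w)
        return torch.tanh(self.generator(torch.cat([prev_image, cond], dim=1)))


if __name__ == "__main__":
    model = RecurrentSceneGraphToImage(dim=128)
    nodes = torch.randn(5, 128)        # 5 objects in the current scene graph
    adj = torch.eye(5)                 # toy adjacency (self-loops only)
    prev = torch.zeros(1, 3, 64, 64)   # blank canvas on the first step
    img = model(nodes, adj, prev)      # updated image: (1, 3, 64, 64)
```

At inference time, each newly described object extends the scene graph, and the previous output is fed back in as `prev_image`, giving the interactive, incrementally additive generation loop the abstract describes.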
URL
https://arxiv.org/abs/1905.03743