Abstract
We introduce LUCSS, a language-based system for interactive colorization of scene sketches, based on their semantic understanding. LUCSS is built upon deep neural networks trained via a large-scale repository of scene sketches and cartoon-style color images with text descriptions. It consists of three sequential modules. First, given a scene sketch, the segmentation module automatically partitions the input sketch into individual object instances. Next, the captioning module generates a text description with spatial relationships based on the instance-level segmentation results. Finally, the interactive colorization module allows users to edit the caption and produces colored images based on the altered caption. Our experiments show the effectiveness of our approach and the advantages of its components over alternative choices.
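The three-stage pipeline described above (segmentation → captioning → caption-conditioned colorization) can be sketched as follows. This is a minimal illustrative mock-up, not the authors' implementation: every class, method, and data structure here is a hypothetical stand-in for the trained neural modules the paper describes.

```python
# Hypothetical sketch of the LUCSS three-stage pipeline; all names are
# illustrative placeholders, not the authors' actual API.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Instance:
    label: str    # object category, e.g. "tree"
    bbox: tuple   # (x, y, w, h) region in the sketch

class Segmenter:
    """Stage 1: partition a scene sketch into object instances."""
    def run(self, sketch) -> List[Instance]:
        # A real system would run a trained instance-segmentation network;
        # this toy version returns a fixed result for demonstration.
        return [Instance("tree", (0, 10, 50, 80)),
                Instance("house", (60, 10, 90, 70))]

class Captioner:
    """Stage 2: describe the instances and their spatial relations."""
    def run(self, instances: List[Instance]) -> str:
        a, b = instances[0], instances[1]
        side = "left of" if a.bbox[0] < b.bbox[0] else "right of"
        return f"a {a.label} {side} a {b.label}"

class Colorizer:
    """Stage 3: render a colored image conditioned on the (edited) caption."""
    def run(self, sketch, caption: str) -> dict:
        # Placeholder for a caption-conditioned image-generation network.
        return {"sketch": sketch, "caption": caption}

def lucss_pipeline(sketch, edit: Optional[Callable[[str], str]] = None) -> dict:
    instances = Segmenter().run(sketch)
    caption = Captioner().run(instances)
    if edit is not None:          # the user may edit the caption before colorization
        caption = edit(caption)
    return Colorizer().run(sketch, caption)

result = lucss_pipeline("scene.svg", edit=lambda c: c + ", the tree is green")
print(result["caption"])  # → "a tree left of a house, the tree is green"
```

The key design point the abstract highlights is that the caption acts as the user-facing interface: colorization is driven by editing the generated text rather than by painting directly on the sketch.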
URL
https://arxiv.org/abs/1808.10544