Abstract
In this work we aim to develop a universal sketch grouper, that is, a grouper that can be applied to sketches of any category in any domain to group constituent strokes/segments into semantically meaningful object parts. The first obstacle to this goal is the lack of large-scale datasets with grouping annotation. To overcome this, we contribute the largest sketch perceptual grouping (SPG) dataset to date, consisting of 20,000 unique sketches evenly distributed over 25 object categories. Furthermore, we propose a novel deep universal perceptual grouping model. The model is learned with both generative and discriminative losses. The generative losses improve the generalisation ability of the model to unseen object categories and datasets. The discriminative losses include a local grouping loss and a novel global grouping loss that enforces global grouping consistency. We show that the proposed model significantly outperforms state-of-the-art groupers. Further, we show that our grouper is useful for a number of sketch analysis tasks, including sketch synthesis and fine-grained sketch-based image retrieval (FG-SBIR).
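The abstract describes a training objective that combines generative losses with two discriminative terms (a local grouping loss and a global grouping-consistency loss). A minimal sketch of how such a composite objective might be assembled; the weighting scheme and the names `w_gen`, `w_local`, `w_global` are illustrative assumptions, not the paper's actual formulation:

```python
# Hypothetical composite training objective combining the three loss terms
# named in the abstract. All weights and component values are illustrative
# placeholders, not the paper's formulation.

def composite_loss(gen_loss: float, local_loss: float, global_loss: float,
                   w_gen: float = 1.0, w_local: float = 1.0,
                   w_global: float = 1.0) -> float:
    """Weighted sum of the generative loss, the local grouping loss,
    and the global grouping-consistency loss."""
    return w_gen * gen_loss + w_local * local_loss + w_global * global_loss

# Example: equal weighting of the three terms.
total = composite_loss(gen_loss=0.5, local_loss=0.2, global_loss=0.3)
print(total)
```

In practice the relative weights would be hyperparameters tuned on validation data, trading off generalisation to unseen categories (generative terms) against grouping accuracy (discriminative terms).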
URL
https://arxiv.org/abs/1808.02312