Abstract
In this paper, we consider the problem of open-vocabulary semantic segmentation (OVS), which aims to segment objects of arbitrary classes instead of pre-defined, closed-set categories. The main contributions are as follows. First, we propose a transformer-based model for OVS, termed OVSegmentor, which exploits only web-crawled image-text pairs for pre-training, without using any mask annotations. OVSegmentor assembles the image pixels into a set of learnable group tokens via a slot-attention-based binding module, and aligns the group tokens with the corresponding caption embedding. Second, we propose two proxy tasks for training, namely masked entity completion and cross-image mask consistency. The former aims to infer all masked entities in the caption given the group tokens, which enables the model to learn fine-grained alignment between visual groups and text entities. The latter enforces consistent mask predictions for images that contain shared entities, encouraging the model to learn visual invariance. Third, we construct the CC4M dataset for pre-training by filtering CC12M for frequently appearing entities, which significantly improves training efficiency. Fourth, we perform zero-shot transfer on three benchmark datasets: PASCAL VOC 2012, PASCAL Context, and COCO Object. Our model achieves superior segmentation results over the state-of-the-art method while using only 3\% of the data (4M vs. 134M image-text pairs) for pre-training. Code and pre-trained models will be released for future research.
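To make the binding mechanism concrete, below is a minimal PyTorch sketch of a slot-attention-style binding module in the spirit of the abstract: learnable group tokens compete for image patches, and the attention map doubles as a soft segmentation. All names, dimensions, and the single-iteration design are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class SlotBinding(nn.Module):
    """Assigns image patch tokens to a set of learnable group tokens
    (a hypothetical sketch of a slot-attention-based binding module)."""

    def __init__(self, dim: int = 256, num_groups: int = 8):
        super().__init__()
        # Learnable group (slot) tokens, shared across images.
        self.group_tokens = nn.Parameter(torch.randn(1, num_groups, dim))
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, patch_tokens: torch.Tensor):
        # patch_tokens: (B, N, dim) features from a transformer image encoder.
        B = patch_tokens.size(0)
        q = self.to_q(self.group_tokens.expand(B, -1, -1))   # (B, G, dim)
        k = self.to_k(patch_tokens)                          # (B, N, dim)
        v = self.to_v(patch_tokens)                          # (B, N, dim)
        logits = q @ k.transpose(-2, -1) * self.scale        # (B, G, N)
        # Slot-attention twist: softmax over the group axis, so groups
        # compete for each pixel; this is the per-pixel soft assignment.
        attn = logits.softmax(dim=1)
        # Per-group weighted mean of patch features.
        weights = attn / attn.sum(dim=-1, keepdim=True).clamp(min=1e-6)
        groups = weights @ v                                 # (B, G, dim)
        return groups, attn  # attn serves as the predicted soft masks
```

The group embeddings returned here would then be aligned with the caption embedding (e.g., via a contrastive objective), while `attn` provides the per-group masks used at inference.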
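The two proxy tasks can likewise be sketched as losses. The exact loss forms, the decoder, and the way cross-image groups are exchanged are assumptions made for illustration only; the abstract specifies the tasks, not these details.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def masked_entity_completion_loss(group_tokens, masked_text_emb, entity_ids,
                                  decoder: nn.TransformerDecoder,
                                  vocab_head: nn.Linear):
    """Infer masked caption entities from the visual groups (hypothetical).

    group_tokens:    (B, G, D) output of the binding module
    masked_text_emb: (B, T, D) caption embeddings with entity tokens masked
    entity_ids:      (B, T) target vocab ids, -100 at unmasked positions
    Assumes `decoder` was built with batch_first=True.
    """
    # Text queries attend to the visual groups to fill in masked entities.
    hidden = decoder(tgt=masked_text_emb, memory=group_tokens)  # (B, T, D)
    logits = vocab_head(hidden)                                 # (B, T, V)
    return F.cross_entropy(logits.flatten(0, 1), entity_ids.flatten(),
                           ignore_index=-100)

def cross_image_consistency_loss(masks_a, masks_b_projected):
    """Encourage consistent masks for a shared entity across two images.

    masks_a:           (B, N) soft mask for the entity in image A
    masks_b_projected: (B, N) mask predicted for image A using group tokens
                       borrowed from image B containing the same entity
    One side is detached and treated as a fixed target (an assumption).
    """
    return F.mse_loss(masks_a, masks_b_projected.detach())
```

Training would combine these with the image-text alignment objective; the relative weighting is not specified in the abstract.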
URL
https://arxiv.org/abs/2301.09121