Abstract
Weakly supervised semantic segmentation (WSSS) aims to learn a semantic segmentation model from image-level tags alone. Despite a decade of intensive research on deep learning approaches, a significant performance gap remains between WSSS and fully supervised semantic segmentation. Most current WSSS methods focus on limited single-image (pixel-wise) information while ignoring valuable inter-image (semantic-wise) information. From this perspective, a novel end-to-end WSSS framework called DSCNet is developed with two innovations: i) pixel-wise group contrast and semantic-wise graph contrast are proposed and introduced into the WSSS framework; ii) a novel dual-stream contrastive learning (DSCL) mechanism is designed to jointly exploit pixel-wise and semantic-wise context information for better WSSS performance. Specifically, the pixel-wise group contrast learning (PGCL) and semantic-wise graph contrast learning (SGCL) tasks together form a more comprehensive solution. Extensive experiments on the PASCAL VOC and MS COCO benchmarks verify the superiority of DSCNet over SOTA approaches and baseline models.
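The dual-stream idea described in the abstract — combining a pixel-level contrastive term with a semantic-level one — can be sketched with a generic InfoNCE-style loss. This is a minimal illustrative sketch, not the paper's actual formulation: the function names (`info_nce`, `dual_stream_loss`), the weighting scheme `alpha`, and the toy embeddings are all assumptions introduced here for clarity.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """Generic InfoNCE loss for one anchor.

    anchor, positive: (d,) L2-normalized embeddings.
    negatives: (n, d) L2-normalized embeddings.
    """
    pos = np.exp(anchor @ positive / tau)
    neg = np.exp(negatives @ anchor / tau).sum()
    return -np.log(pos / (pos + neg))

def dual_stream_loss(pix_anchor, pix_pos, pix_negs,
                     sem_anchor, sem_pos, sem_negs,
                     alpha=0.5, tau=0.1):
    """Weighted sum of a pixel-wise and a semantic-wise contrastive term.

    The pixel stream contrasts pixel embeddings within/across groups;
    the semantic stream contrasts class-level embeddings (e.g. graph
    node features). `alpha` balancing the two streams is a placeholder.
    """
    l_pix = info_nce(pix_anchor, pix_pos, pix_negs, tau)
    l_sem = info_nce(sem_anchor, sem_pos, sem_negs, tau)
    return alpha * l_pix + (1 - alpha) * l_sem

def unit(v):
    """L2-normalize a vector."""
    return v / np.linalg.norm(v)

# Toy embeddings standing in for pixel features and class prototypes.
rng = np.random.default_rng(0)
a = unit(rng.normal(size=8))
p = unit(a + 0.1 * rng.normal(size=8))      # a nearby positive
negs = np.stack([unit(rng.normal(size=8)) for _ in range(5)])

loss = dual_stream_loss(a, p, negs, a, p, negs)
print(float(loss))
```

In practice both streams would draw anchors, positives, and negatives from learned feature maps across a batch of images, which is where the inter-image (semantic-wise) signal comes from.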
URL
https://arxiv.org/abs/2405.04913