Abstract
Video scene parsing incorporates temporal information, which can improve the consistency and accuracy of predictions compared to image scene parsing. The added temporal dimension enables a more comprehensive understanding of the scene, leading to more reliable results. This paper presents the winning solution of the CVPR 2023 video semantic segmentation workshop, which focuses on enhancing spatial-temporal correlations with a contrastive loss. We also explore the influence of multi-dataset training by utilizing a label-mapping technique. The final result aggregates the outputs of these two models. Our approach achieves 65.95% mIoU on the VSPW dataset, ranking 1st in the VSPW challenge at CVPR 2023.
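The label-mapping technique mentioned above can be sketched as remapping an auxiliary dataset's class ids into the target (VSPW) label space before joint training, sending unmatched classes to an ignore index. This is a minimal illustration under assumed conventions; the class ids, names, and ignore value here are hypothetical and not taken from the paper.

```python
import numpy as np

# Hypothetical mapping: auxiliary-dataset class id -> VSPW class id.
# Classes with no VSPW counterpart map to an ignore index excluded from the loss.
IGNORE_INDEX = 255
LABEL_MAP = {0: 3, 1: 7, 2: IGNORE_INDEX}

def remap_mask(mask: np.ndarray, label_map: dict, ignore_index: int = IGNORE_INDEX) -> np.ndarray:
    """Remap an integer segmentation mask into the target label space via a lookup table."""
    lut = np.full(256, ignore_index, dtype=np.uint8)  # default every id to "ignore"
    for src, dst in label_map.items():
        lut[src] = dst
    return lut[mask]  # vectorized per-pixel remapping

mask = np.array([[0, 1], [2, 0]], dtype=np.uint8)
print(remap_mask(mask, LABEL_MAP))  # [[3 7] [255 3]]
```

Training code would then treat the remapped masks exactly like native VSPW annotations, with the ignore index masked out of the segmentation loss.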
URL
https://arxiv.org/abs/2306.03508