Abstract
We study the potential of noisy labels y for pretraining semantic segmentation models in a multi-modal learning framework for geospatial applications. Specifically, we propose a novel Cross-modal Sample Selection method (CromSS) that exploits the class distributions P^{(d)}(x,c) over pixels x and classes c modelled by multiple sensors/modalities d of a given geospatial scene. The consistency of predictions across sensors d is jointly informed by the entropy of P^{(d)}(x,c). Noisy label sampling is determined by the confidence of each sensor d in the noisy class label, P^{(d)}(x,c=y(x)). To verify the performance of our approach, we conduct experiments with Sentinel-1 (radar) and Sentinel-2 (optical) satellite imagery from the globally sampled SSL4EO-S12 dataset. We pair these scenes with 9-class noisy labels sourced from the Google Dynamic World project for pretraining. Transfer learning evaluations (downstream task) on the DFC2020 dataset confirm the effectiveness of the proposed method for remote sensing image segmentation.
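The selection rule described above — keeping the pixels in which each sensor is most confident about the noisy label P^{(d)}(x,c=y(x)) — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation; the function name, signature, the choice of averaging confidences across sensors, and the quantile threshold are all assumptions made for illustration.

```python
import numpy as np

def cromss_select(probs, noisy_labels, keep_ratio=0.5):
    """Illustrative confidence-based noisy-label selection across modalities.

    probs: list of arrays, one per sensor d, each of shape (H, W, C)
           holding the predicted class distribution P^(d)(x, c).
    noisy_labels: (H, W) integer array of noisy labels y(x).
    keep_ratio: fraction of pixels to keep for the pretraining loss.
    Returns a boolean (H, W) mask of selected pixels.
    """
    H, W = noisy_labels.shape
    # Confidence of each sensor in the noisy label: P^(d)(x, c=y(x))
    confs = [p[np.arange(H)[:, None], np.arange(W)[None, :], noisy_labels]
             for p in probs]
    # Combine sensors; averaging is one simple choice (an assumption here)
    conf = np.mean(confs, axis=0)
    # Keep the most confident fraction of pixels
    thresh = np.quantile(conf, 1.0 - keep_ratio)
    return conf >= thresh
```

For example, with two sensors (e.g. radar and optical) producing per-pixel class distributions, the returned mask would select roughly the top half of pixels by combined label confidence, and the segmentation loss would then be computed only on those pixels.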
URL
https://arxiv.org/abs/2405.01217