Abstract
Remote sensing imagery, despite its broad applications in helping achieve Sustainable Development Goals and tackle climate change, has not yet benefited from the recent advancements of versatile, task-agnostic vision language models (VLMs). A key reason is that the large-scale, semantically diverse image-text dataset required for developing VLMs is still absent for remote sensing images. Unlike natural images, remote sensing images and their associated text descriptions cannot be efficiently collected from the public Internet at scale. In this work, we bridge this gap by using geo-coordinates to automatically connect open, unlabeled remote sensing images with the rich semantics covered in OpenStreetMap, and thus construct SkyScript, a comprehensive vision-language dataset for remote sensing images, comprising 2.6 million image-text pairs covering 29K distinct semantic tags. With continual pre-training on this dataset, we obtain a VLM that surpasses baseline models with a 6.2% average accuracy gain in zero-shot scene classification across seven benchmark datasets. It also demonstrates zero-shot transfer to fine-grained object attribute classification and cross-modal retrieval. We hope this dataset can support the advancement of VLMs for various multi-modal tasks in remote sensing, such as open-vocabulary classification, retrieval, captioning, and text-to-image synthesis.
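The geo-coordinate matching idea described above can be sketched in a few lines: given an image's footprint, collect the OpenStreetMap tags of features falling inside it and compose them into a caption. This is a hypothetical minimal sketch of the concept, not the authors' actual pipeline; the class and function names (`OSMFeature`, `caption_for_image`) and the caption template are assumptions.

```python
from dataclasses import dataclass, field


@dataclass
class OSMFeature:
    """A hypothetical stand-in for an OpenStreetMap feature (node)."""
    lat: float
    lon: float
    tags: dict = field(default_factory=dict)  # e.g. {"building": "church"}


def caption_for_image(center_lat: float, center_lon: float,
                      half_size_deg: float, features: list) -> str:
    """Compose a text caption from the OSM tags of features whose
    coordinates fall inside the image's square footprint.
    A sketch of the geo-matching idea only, not the SkyScript pipeline."""
    phrases = []
    for f in features:
        # Keep features inside the (approximate) image bounding box.
        if (abs(f.lat - center_lat) <= half_size_deg
                and abs(f.lon - center_lon) <= half_size_deg):
            for key, value in f.tags.items():
                phrases.append(f"{key}: {value}")
    if not phrases:
        return "a satellite image"
    return "a satellite image of " + ", ".join(phrases)
```

In practice the matching would also need to handle OSM ways and relations (polygons, not just points), projection of the image footprint, and filtering of uninformative tags, but the core pairing of imagery and map semantics follows this pattern.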
URL
https://arxiv.org/abs/2312.12856