Paper Reading AI Learner

Spatio-Temporal SwinMAE: A Swin Transformer based Multiscale Representation Learner for Temporal Satellite Imagery

2024-05-03 22:55:56
Yohei Nakayama, Jiawei Su

Abstract

Foundation models, represented by large language models, have recently made dramatic progress and are used in a very wide range of domains, including 2D and 3D vision. As one of the important application domains of foundation models, earth observation has attracted attention and various approaches have been developed. When earth observation is treated as a single image capture, the imagery can be processed as an image with three or more channels; when multiple captures with different timestamps exist at one location, the temporal observations can be treated as a sequence of continuous images resembling video frames or medical scan slices. This paper presents Spatio-Temporal SwinMAE (ST-SwinMAE), an architecture that focuses on representation learning for spatio-temporal image processing. Specifically, it uses a hierarchical Masked Auto-encoder (MAE) with Video Swin Transformer blocks. With this architecture, we present a pretrained model named Degas 100M as a geospatial foundation model. We also propose an approach for transfer learning with Degas 100M in which both the pretrained encoder and decoder of the MAE are utilized, with skip connections added between them to achieve multi-scale information communication, forming an architecture named Spatio-Temporal SwinUNet (ST-SwinUNet). Our approach shows significant performance improvements over existing state-of-the-art foundation models. Specifically, for transfer learning on the land cover downstream task of the PhilEO Bench dataset, it achieves 10.4% higher accuracy on average than other geospatial foundation models.
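The core of MAE-style pretraining described above is partitioning the spatio-temporal volume into patch tokens and randomly masking most of them, so the encoder only sees the visible subset. A minimal sketch of that masking step is shown below; the function name `tube_mask`, the patch size `(2, 4, 4)`, and the 75% mask ratio are illustrative assumptions, not the paper's exact configuration.

```python
import random

def tube_mask(t, h, w, patch=(2, 4, 4), mask_ratio=0.75, seed=0):
    """Randomly mask spatio-temporal patches ("tubes") of a T x H x W volume,
    MAE-style. Illustrative sketch only: the paper's actual patch sizes and
    masking scheme may differ."""
    pt, ph, pw = patch
    assert t % pt == 0 and h % ph == 0 and w % pw == 0
    n = (t // pt) * (h // ph) * (w // pw)  # total number of patch tokens
    n_masked = int(n * mask_ratio)
    rng = random.Random(seed)
    idx = list(range(n))
    rng.shuffle(idx)
    masked = set(idx[:n_masked])
    # The encoder processes only the visible tokens; the decoder
    # reconstructs the masked ones from the encoded representation.
    visible = [i for i in range(n) if i not in masked]
    return visible, masked

# e.g. 8 frames of 64x64 imagery -> 1024 tokens, 256 of them visible
visible, masked = tube_mask(8, 64, 64)
print(len(visible), len(masked))
```

A high mask ratio makes reconstruction a non-trivial pretext task and keeps the encoder's sequence length short, which is what makes MAE pretraining efficient on large volumes of satellite imagery.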


URL

https://arxiv.org/abs/2405.02512

PDF

https://arxiv.org/pdf/2405.02512.pdf

