
QuakeSet: A Dataset and Low-Resource Models to Monitor Earthquakes through Sentinel-1

2024-03-26 21:45:29
Daniele Rege Cambrin, Paolo Garza

Abstract

Earthquake monitoring is necessary to promptly identify the affected areas and the severity of the events and, finally, to estimate damages and plan the actions needed for the restoration process. The use of seismic stations to monitor the strength and origin of earthquakes is limited in remote areas, since dense global coverage is not feasible. Identifying and analyzing all affected areas is therefore essential to support regions not covered by traditional stations. Using social media images in crisis management has proven effective in various situations, but it depends on working communication infrastructure after an earthquake and on people being present in the area. Moreover, social media images and messages cannot effectively estimate the actual severity of earthquakes and their characteristics. Employing satellites to monitor changes around the globe makes it possible to exploit instrumentation that is not limited by the visible spectrum, by land infrastructure, or by the presence of people in the affected areas. In this work, we propose a new dataset composed of Sentinel-1 images and a new series of tasks to help monitor earthquakes from a new, detailed perspective. Coupled with the data, we provide a set of traditional machine learning and deep learning models as baselines to assess the effectiveness of ML-based approaches to earthquake analysis.
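To make the baseline idea concrete, below is a minimal, illustrative sketch of the kind of model such a benchmark could evaluate: a small PyTorch CNN that classifies a bi-temporal Sentinel-1 patch pair as earthquake-affected or not. This is not the authors' released code; the 4-channel layout (pre- and post-event VV/VH stacked together), the patch size, and all class/function names are assumptions made for illustration.

# Illustrative sketch only: a tiny binary classifier over bi-temporal Sentinel-1 patches.
# Assumption: each sample stacks pre- and post-event SAR polarizations (VV/VH) into a
# 4-channel tensor with a 0/1 "earthquake affected" label. Not the paper's baseline code.

import torch
import torch.nn as nn

class TinySARBaseline(nn.Module):
    def __init__(self, in_channels: int = 4, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),  # global pooling keeps the head independent of patch size
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 4, H, W) -- pre-event VV/VH stacked with post-event VV/VH
        feats = self.features(x).flatten(1)
        return self.classifier(feats)

if __name__ == "__main__":
    model = TinySARBaseline()
    dummy = torch.randn(8, 4, 512, 512)  # synthetic bi-temporal patches, shapes assumed
    logits = model(dummy)
    print(logits.shape)  # torch.Size([8, 2])

A magnitude-estimation variant of the same sketch would simply replace the two-class head with a single regression output; both tasks share the bi-temporal Sentinel-1 input described in the abstract.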

URL

https://arxiv.org/abs/2403.18116

PDF

https://arxiv.org/pdf/2403.18116.pdf

