
Seismic Fault Segmentation via 3D-CNN Training by a Few 2D Slices Labels

2021-05-09 07:13:40
YiMin Dou, Kewen Li, Jianbing Zhu, Xiao Li, Yingjie Xi

Abstract

Detecting faults in seismic data is a crucial step for seismic structural interpretation, reservoir characterization, and well placement, and it remains highly challenging. Some recent works treat fault detection as an image segmentation task. Image segmentation, however, requires a large number of labels, and 3D seismic data in particular has a complex structure and heavy noise, so its annotation demands expert experience and a huge workload; wrong or missing labels degrade the segmentation performance of the model. In this study, we present a new binary cross-entropy and smooth L1 loss ({\lambda}-BCE and {\lambda}-smooth L1) that effectively trains a 3D-CNN by sampling only a few 2D slices from the 3D seismic data, so that the model can learn 3D seismic segmentation from a few 2D slice labels. To fully extract information from the limited, low-dimensional labels and to suppress seismic noise, we propose an attention module that supports actively supervised training (Active Attention Module, AAM) and is embedded in the network to participate in the differentiation and optimization of the model. During training, an attention heatmap target is generated from the original binary labels and supervises the attention module through the {\lambda}-smooth L1 loss. Qualitative experiments on real data show that our method can extract 3D seismic features from a few 2D slice labels and segment a complete fault volume, and the visualized segmentation results achieve state-of-the-art quality. Quantitative experiments on synthetic data demonstrate the effectiveness of our training method and attention module. With our method, labeling as little as one 2D slice every 30 frames (3.3% of the original labels) allows the model to achieve segmentation performance similar to that obtained with full 3D labels.
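The training idea described in the abstract lends itself to a short sketch: a per-voxel loss is weighted so that only the sampled 2D slices that actually carry labels contribute to the gradient, while unlabeled voxels are ignored. The PyTorch code below is a minimal illustration of such masked losses in the spirit of {\lambda}-BCE and {\lambda}-smooth L1; the function names, tensor layout, and slice-sampling scheme are assumptions for illustration, not the authors' implementation.

    # Minimal sketch: losses restricted to a few labeled 2D slices of a 3D volume.
    # Names, tensor layout, and slice sampling are illustrative assumptions.
    import torch
    import torch.nn.functional as F

    def slice_mask(shape, labeled_slices, dim, device):
        """Return a 0/1 mask of `shape` that is 1 only on the labeled slices along `dim`."""
        mask = torch.zeros(shape, device=device)
        index = torch.as_tensor(list(labeled_slices), device=device)
        return mask.index_fill_(dim, index, 1.0)

    def masked_bce(pred, target, mask):
        """Binary cross-entropy evaluated only where mask == 1 (the labeled slices)."""
        bce = F.binary_cross_entropy(pred, target, reduction="none")
        return (bce * mask).sum() / mask.sum().clamp(min=1.0)

    def masked_smooth_l1(attn, heatmap_target, mask):
        """Smooth L1 supervision of an attention map, restricted to labeled slices."""
        sl1 = F.smooth_l1_loss(attn, heatmap_target, reduction="none")
        return (sl1 * mask).sum() / mask.sum().clamp(min=1.0)

    # Example: a (B, 1, D, H, W) volume labeled on one slice every 30 frames,
    # matching the 3.3% labeling ratio quoted in the abstract.
    B, D, H, W = 1, 128, 128, 128
    pred = torch.rand(B, 1, D, H, W)                      # network fault probabilities
    target = (torch.rand(B, 1, D, H, W) > 0.5).float()    # binary fault labels
    mask = slice_mask(pred.shape, range(0, D, 30), dim=2, device=pred.device)
    loss = masked_bce(pred, target, mask)

The same mask can weight the smooth L1 term that supervises the attention module against a heatmap target derived from the binary labels, so that both losses see only the sparsely labeled slices.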

URL

https://arxiv.org/abs/2105.03857

PDF

https://arxiv.org/pdf/2105.03857.pdf

