Paper Reading AI Learner

MnEdgeNet -- Accurate Decomposition of Mixed Oxidation States for Mn XAS and EELS L2,3 Edges without Reference and Calibration

2022-10-21 01:04:24
Huolin L. Xin, Mike Hu

Abstract

Accurate decomposition of mixed Mn oxidation states is highly important for characterizing the electronic structure, charge transfer, and redox centers of electronic, electrocatalytic, and energy storage materials that contain Mn. Electron energy loss spectroscopy (EELS) and soft X-ray absorption spectroscopy (XAS) measurements of the Mn L2,3 edges are widely used for this purpose. To date, although measuring the Mn L2,3 edges is straightforward provided the sample is prepared properly, accurately decomposing the mixed valence states of Mn remains non-trivial. For both EELS and XAS, 2+, 3+, and 4+ reference spectra need to be taken on the same instrument/beamline, preferably in the same experimental session, because the instrumental resolution and the energy axis offset can vary from one session to another. To circumvent this hurdle, in this study we adopted a deep learning approach and developed a calibration-free and reference-free method to decompose the oxidation states of the Mn L2,3 edges for both EELS and XAS. To synthesize physics-informed, ground-truth-labeled training datasets, we created a forward model that takes into account plural scattering, instrumental broadening, noise, and energy axis offset. With this model, we created a 1.2 million-spectrum database with a three-element oxidation state composition label. The library includes a sufficient variety of data, covering both EELS and XAS spectra. By training on this large database, our convolutional neural network achieves 85% accuracy on the validation dataset. We tested the model and found it is robust against noise (down to a PSNR of 10) and plural scattering (up to t/λ = 1). We further validated the model against spectral data that were not used in training.
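The forward model described in the abstract can be illustrated with a minimal sketch: take 2+/3+/4+ reference spectra, mix them with a composition label, apply Gaussian instrumental broadening, shift the energy axis, and add PSNR-controlled noise. All function names and parameters below are illustrative assumptions, not the authors' implementation; plural scattering (a t/λ-weighted self-convolution with the low-loss spectrum) is omitted for brevity.

```python
import numpy as np

def gaussian_kernel(fwhm, dE, half_width=5.0):
    """Gaussian broadening kernel sampled on the spectrum's energy grid."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
    x = np.arange(-half_width * sigma, half_width * sigma + dE, dE)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def synthesize_spectrum(refs, weights, dE, fwhm=0.5, offset_eV=0.0,
                        psnr=20.0, rng=None):
    """Generate one labeled training spectrum (hypothetical sketch).

    refs      : (3, N) array of Mn 2+/3+/4+ reference spectra
    weights   : length-3 composition label summing to 1 (the ground truth)
    dE        : energy step (eV) of the grid
    fwhm      : instrumental broadening (eV)
    offset_eV : energy-axis offset
    psnr      : peak signal-to-noise ratio of the added Gaussian noise
    """
    rng = np.random.default_rng(rng)
    mix = weights @ refs                                 # linear combination
    mix = np.convolve(mix, gaussian_kernel(fwhm, dE), mode="same")
    mix = np.roll(mix, int(round(offset_eV / dE)))       # crude axis offset
    noise_sigma = mix.max() / psnr                       # PSNR-controlled noise
    return mix + rng.normal(0.0, noise_sigma, mix.size)

# Usage: draw a random composition label and synthesize one spectrum
refs = np.random.rand(3, 512)            # placeholder reference spectra
w = np.random.dirichlet(np.ones(3))      # random 2+/3+/4+ fractions
spec = synthesize_spectrum(refs, w, dE=0.1, fwhm=0.6,
                           offset_eV=0.3, psnr=15.0)
```

Repeating this with randomized weights, broadening, offsets, and noise levels is one plausible way to build a large labeled library like the 1.2 million-spectrum database the paper describes.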

URL

https://arxiv.org/abs/2210.11657

PDF

https://arxiv.org/pdf/2210.11657.pdf

