Paper Reading AI Learner

Comparing LSTM-Based Sequence-to-Sequence Forecasting Strategies for 24-Hour Solar Proton Flux Profiles Using GOES Data

2025-10-06 21:45:37
Kangwoo Yi, Bo Shen, Qin Li, Haimin Wang, Yong-Jae Moon, Jaewon Lee, Hwanhee Lee

Abstract

Solar Proton Events (SPEs) cause significant radiation hazards to satellites, astronauts, and technological systems. Accurate forecasting of their proton flux time profiles is crucial for early warnings and mitigation. This paper explores deep learning sequence-to-sequence (seq2seq) models based on Long Short-Term Memory networks to predict 24-hour proton flux profiles following SPE onsets. We used a dataset of 40 well-connected SPEs (1997-2017) observed by NOAA GOES, each associated with a >=M-class western-hemisphere solar flare and an undisturbed proton flux profile. Using 4-fold stratified cross-validation, we evaluate seq2seq model configurations (varying hidden units and embedding dimensions) under multiple forecasting scenarios: (i) proton-only input vs. combined proton+X-ray input, (ii) original flux data vs. trend-smoothed data, and (iii) autoregressive vs. one-shot forecasting. Our major results are as follows: First, one-shot forecasting consistently yields lower error than autoregressive prediction, avoiding the error accumulation seen in iterative approaches. Second, on the original data, proton-only models outperform proton+X-ray models. However, with trend-smoothed data, this gap narrows or even reverses in favor of the proton+X-ray models. Third, trend-smoothing significantly enhances the performance of proton+X-ray models by mitigating fluctuations in the X-ray channel. Fourth, while models trained on trend-smoothed data perform best on average, the single best-performing model was trained on original data, suggesting that architectural choices can sometimes outweigh the benefits of data preprocessing.
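The abstract's first finding — that one-shot forecasting avoids the error accumulation of autoregressive prediction — can be illustrated with a minimal toy sketch. This is not the paper's model: the LSTM is replaced by a deliberately biased one-step predictor of an exponentially decaying flux, and all names and numbers are illustrative assumptions. It only shows the mechanism: feeding imperfect predictions back as inputs compounds the per-step error over the 24-step horizon, while emitting all 24 steps in one shot does not.

```python
# Toy contrast of autoregressive vs. one-shot forecasting over a
# 24-step horizon (as in the paper's 24-hour proton flux profiles).
# The "learned" model mis-estimates the true decay rate by 2%.

HORIZON = 24  # forecast 24 hourly values

def true_next(x):
    """Ground-truth dynamics: simple exponential decay of flux."""
    return 0.9 * x

def learned_next(x):
    """Imperfect one-step model: slightly biased decay rate."""
    return 0.92 * x

def autoregressive_forecast(x0, horizon=HORIZON):
    """Iterate the one-step model, feeding each prediction back in.
    The 2% bias compounds: step t predicts 0.92**t * x0."""
    preds, x = [], x0
    for _ in range(horizon):
        x = learned_next(x)
        preds.append(x)
    return preds

def one_shot_forecast(x0, horizon=HORIZON):
    """Map the input directly to every horizon step at once (as a
    seq2seq decoder emitting 24 values would). Each output carries
    the same one-step bias, but it is never fed back, so errors
    do not compound."""
    return [0.92 * (0.9 ** (t - 1)) * x0 for t in range(1, horizon + 1)]

def truth(x0, horizon=HORIZON):
    out, x = [], x0
    for _ in range(horizon):
        x = true_next(x)
        out.append(x)
    return out

def mae(a, b):
    return sum(abs(u - v) for u, v in zip(a, b)) / len(a)

x0 = 100.0  # arbitrary initial flux
err_ar = mae(autoregressive_forecast(x0), truth(x0))
err_os = mae(one_shot_forecast(x0), truth(x0))
print(f"autoregressive MAE: {err_ar:.3f}")
print(f"one-shot MAE:       {err_os:.3f}")
```

In this toy, the autoregressive error grows with the compounding factor 0.92**t while the one-shot error stays bounded by the single-step bias, mirroring (in caricature) the paper's first result.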

URL

https://arxiv.org/abs/2510.05399

PDF

https://arxiv.org/pdf/2510.05399.pdf

