Paper Reading AI Learner

One-Shot Price Forecasting with Covariate-Guided Experts under Privacy Constraints

2026-01-17 09:13:57
Ren He (Tsinghua University), Yinliang Xu (Tsinghua University), Jinfeng Wang (Guangdong Power Grid Co.), Jeremy Watson (University of Canterbury), Jian Song (Tsinghua University)

Abstract

Forecasting in power systems often involves multivariate time series with complex dependencies and strict privacy constraints across regions. Traditional forecasting methods require significant expert knowledge and struggle to generalize across diverse deployment scenarios. Recent advancements in pre-trained time series models offer new opportunities, but their zero-shot performance on domain-specific tasks remains limited. To address these challenges, we propose a novel MoE-Encoder module that augments pretrained forecasting models by injecting a sparse mixture-of-experts layer between tokenization and encoding. This design enables two key capabilities: (1) transforming multivariate forecasting into an expert-guided univariate task, allowing the model to effectively capture inter-variable relations, and (2) supporting localized training and lightweight parameter sharing in federated settings where raw data cannot be exchanged. Extensive experiments on public multivariate datasets demonstrate that MoE-Encoder significantly improves forecasting accuracy compared to strong baselines. We further simulate federated environments and show that transferring only MoE-Encoder parameters allows efficient adaptation to new regions, with minimal performance degradation. Our findings suggest that MoE-Encoder provides a scalable and privacy-aware extension to foundation time series models.
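The abstract gives no architectural details, but the two capabilities it names (sparse expert routing between tokenization and encoding, and sharing only the MoE-Encoder parameters in federated settings) can be illustrated with a minimal sketch. Everything below is a hypothetical toy, not the paper's implementation: the class name, top-1 routing, linear experts, and all dimensions are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

class MoEEncoderSketch:
    """Toy sparse mixture-of-experts layer placed between tokenization
    and encoding: each variable-token is routed to its top-1 expert,
    so the multivariate series is handled as expert-guided univariate
    streams. Purely illustrative; not the paper's architecture."""

    def __init__(self, d_model: int, n_experts: int):
        # Gating network: scores each token against every expert.
        self.gate = rng.standard_normal((d_model, n_experts)) * 0.02
        # One linear expert per slot (weights only, for brevity).
        self.experts = [rng.standard_normal((d_model, d_model)) * 0.02
                        for _ in range(n_experts)]

    def forward(self, tokens: np.ndarray) -> np.ndarray:
        # tokens: (n_tokens, d_model) -- one token per variable/patch.
        scores = tokens @ self.gate              # (n_tokens, n_experts)
        choice = scores.argmax(axis=1)           # sparse top-1 routing
        out = np.empty_like(tokens)
        for i, k in enumerate(choice):
            out[i] = tokens[i] @ self.experts[k] # expert-specific encoding
        return out

    def shareable_parameters(self) -> dict:
        # In a federated setting, only these small MoE-Encoder parameters
        # would be exchanged; the backbone and raw data stay local.
        return {"gate": self.gate, "experts": self.experts}

moe = MoEEncoderSketch(d_model=16, n_experts=4)
x = rng.standard_normal((8, 16))   # 8 variable-tokens of width 16
y = moe.forward(x)
print(y.shape)                     # (8, 16)
```

The `shareable_parameters` dict stands in for the lightweight transfer the abstract describes: a new region would receive these few arrays and fine-tune them locally, never exchanging raw series.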


URL

https://arxiv.org/abs/2601.11977

PDF

https://arxiv.org/pdf/2601.11977.pdf

