Paper Reading AI Learner

A Learnable Wavelet Transformer for Long-Short Equity Trading and Risk-Adjusted Return Optimization

2026-01-19 22:41:31
Shuozhe Li, Du Cheng, Leqi Liu

Abstract

Learning profitable intraday trading policies from financial time series is challenging due to heavy noise, non-stationarity, and strong cross-sectional dependence among related assets. We propose \emph{WaveLSFormer}, a learnable wavelet-based long-short Transformer that jointly performs multi-scale decomposition and return-oriented decision learning. Specifically, a learnable wavelet front-end generates low-/high-frequency components via an end-to-end trained filter bank, guided by spectral regularizers that encourage stable and well-separated frequency bands. To fuse multi-scale information, we introduce a low-guided high-frequency injection (LGHI) module that refines low-frequency representations with high-frequency cues while controlling training stability. The model outputs a portfolio of long/short positions that is rescaled to satisfy a fixed risk budget, and is optimized directly with a trading objective and risk-aware regularization. Extensive experiments on five years of hourly data across six industry groups, evaluated over ten random seeds, demonstrate that WaveLSFormer consistently outperforms MLP, LSTM, and Transformer backbones, with and without fixed discrete wavelet front-ends. Averaged across all six industry groups, WaveLSFormer achieves a cumulative overall strategy return of $0.607 \pm 0.045$ and a Sharpe ratio of $2.157 \pm 0.166$, substantially improving both profitability and risk-adjusted returns over the strongest baselines.
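The abstract names the main components — a two-channel learnable filter bank, risk-budget position rescaling, and a directly optimized trading objective — without giving their formulations. The sketch below illustrates one plausible reading under explicit assumptions: Haar-initialized (fixed, not learned) filters, a dollar-neutral gross-exposure budget, and a negative-Sharpe loss. All names and details here are hypothetical illustrations, not the paper's actual design.

```python
import numpy as np

def wavelet_split(x, h=None):
    """Two-channel analysis filter bank: split a series into low-/high-
    frequency parts. The paper trains h end-to-end with spectral
    regularizers; here h is fixed to a Haar low-pass for illustration."""
    if h is None:
        h = np.array([1.0, 1.0]) / np.sqrt(2.0)        # Haar low-pass taps
    g = ((-1) ** np.arange(len(h))) * h[::-1]          # mirrored high-pass
    return np.convolve(x, h, mode="same"), np.convolve(x, g, mode="same")

def rescale_to_risk_budget(scores, target_gross=1.0):
    """Map raw per-asset scores to dollar-neutral long/short weights whose
    gross exposure sum(|w|) equals a fixed budget (assumed risk measure)."""
    centered = scores - scores.mean()                   # longs offset shorts
    gross = np.abs(centered).sum()
    if gross == 0.0:
        return np.zeros_like(centered)
    return target_gross * centered / gross

def neg_sharpe(weights, asset_returns, eps=1e-8):
    """Negative per-period Sharpe ratio of the strategy, to be minimized."""
    pnl = asset_returns @ weights                       # strategy return series
    return -pnl.mean() / (pnl.std() + eps)

# Toy end-to-end pass on random data (illustration only).
rng = np.random.default_rng(0)
prices = rng.normal(size=128).cumsum()
low, high = wavelet_split(prices)                       # multi-scale features
weights = rescale_to_risk_budget(rng.normal(size=6))    # long/short portfolio
loss = neg_sharpe(weights, rng.normal(size=(100, 6)))   # trading objective
```

In the paper the filter taps would be trainable parameters updated jointly with the Transformer, with spectral regularizers keeping the low-/high-frequency bands stable and well separated; this sketch fixes them for clarity.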

Abstract (translated)

Learning profitable intraday trading policies from financial time series is challenging due to heavy noise, non-stationarity, and strong cross-sectional dependence among related assets. We propose **WaveLSFormer**, a learnable wavelet-based long-short Transformer that jointly performs multi-scale decomposition and return-oriented decision learning. Specifically, WaveLSFormer generates low-/high-frequency components through an end-to-end trained wavelet filter bank, with spectral regularizers encouraging stable, well-separated frequency bands. To fuse multi-scale information, we introduce a low-guided high-frequency injection (LGHI) module that refines low-frequency representations with high-frequency cues while controlling training stability. The model outputs a portfolio of long/short positions, rescaled to satisfy a fixed risk budget, and is trained by directly optimizing a trading objective with risk-aware regularization. Extensive experiments on five years of hourly data spanning six industry groups, evaluated over ten random seeds, show that WaveLSFormer consistently outperforms MLP, LSTM, and Transformer backbones, with or without fixed discrete wavelet front-ends. Averaged across all industries, WaveLSFormer achieves a cumulative strategy return of 0.607 ± 0.045 and a Sharpe ratio of 2.157 ± 0.166, substantially improving both profitability and risk-adjusted returns over the strongest baselines.

URL

https://arxiv.org/abs/2601.13435

PDF

https://arxiv.org/pdf/2601.13435.pdf
