Abstract
Effective reinforcement learning (RL) for sepsis treatment depends on learning stable, clinically meaningful state representations from irregular ICU time series. While prior work has explored representation learning for this task, the critical challenge of training instability in sequential representations, and its detrimental impact on policy performance, has been overlooked. This work demonstrates that Controlled Differential Equation (CDE) state representations can yield strong RL policies when two key conditions are met: (1) training stability is ensured through early stopping or stabilization methods, and (2) acuity-aware representations are enforced via correlation regularization with clinical scores (SOFA, SAPS-II, OASIS). Experiments on the MIMIC-III sepsis cohort reveal that a stable CDE autoencoder produces representations strongly correlated with acuity scores and enables RL policies with superior performance (WIS return $> 0.9$). In contrast, unstable CDE training leads to degraded representations and policy failure (WIS return $\sim$ 0). Visualizations of the latent space show that stable CDEs not only separate survivor and non-survivor trajectories but also reveal clear acuity-score gradients, whereas unstable training fails to capture either pattern. These findings provide practical guidelines for using CDEs to encode irregular medical time series in clinical RL, emphasizing the need for training stability in sequential representation learning.
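The correlation-regularization idea described above can be illustrated with a minimal sketch. This is a hypothetical implementation, not the paper's code: it assumes the latent state is summarized by a simple mean projection and penalizes low Pearson correlation between that projection and a clinical acuity score (e.g. SOFA), so the autoencoder's total loss would be reconstruction loss plus this term.

```python
import numpy as np

def correlation_regularizer(z, scores, eps=1e-8):
    """Hypothetical acuity-correlation penalty (not the paper's exact loss).

    z      : (batch, latent_dim) array of latent states from the CDE encoder
    scores : (batch,) array of acuity scores (e.g. SOFA) for the same patients
    Returns a loss in [0, 1] that is small when |Pearson corr| is high,
    encouraging acuity-aware latent representations.
    """
    proj = z.mean(axis=1)                      # crude 1-D summary of the latent state
    zc = proj - proj.mean()                    # center both variables
    sc = scores - scores.mean()
    corr = (zc * sc).sum() / (np.linalg.norm(zc) * np.linalg.norm(sc) + eps)
    return 1.0 - abs(corr)                     # 0 when perfectly (anti-)correlated
```

In training, this penalty would be added (with a weight hyperparameter) to the autoencoder's reconstruction objective; the abstract's results suggest such a term helps the latent space exhibit the acuity-score gradients seen in the visualizations.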
URL
https://arxiv.org/abs/2506.15019