Abstract
We present the first end-to-end sample complexity analysis of model-free policy gradient (PG) methods for discrete-time infinite-horizon Kalman filtering. Specifically, we introduce the receding-horizon policy gradient (RHPG-KF) framework and establish an $\tilde{\mathcal{O}}(\epsilon^{-2})$ sample complexity for RHPG-KF to learn a stabilizing filter that is $\epsilon$-close to the optimal Kalman filter. Notably, the proposed RHPG-KF framework requires neither the system to be open-loop stable nor any prior knowledge of a stabilizing filter. Our results shed light on applying model-free PG methods to control linear dynamical systems whose state measurements may be corrupted by statistical noise and other (possibly adversarial) disturbances.
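For intuition only, the following is a minimal sketch (not the paper's actual algorithm) of what a model-free, receding-horizon policy-gradient loop for learning a filter gain might look like: a candidate gain L is improved by two-point zeroth-order gradient estimates of the simulated estimation error, while the horizon is grown incrementally with warm-starting. All system matrices, the zeroth-order estimator, and every hyperparameter below are invented assumptions, not details from the paper.

# A minimal sketch, assuming a known simulator of the hypothetical system
# x_{t+1} = A x_t + w_t,  y_t = C x_t + v_t; not the paper's RHPG-KF method.
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[1.1, 0.2], [0.0, 0.9]])   # open-loop unstable is allowed
C = np.array([[1.0, 0.0]])
W, V = 0.1 * np.eye(2), 0.1 * np.eye(1)  # process / measurement noise covariances

def rollout_cost(L, horizon, n_traj=16):
    """Monte-Carlo estimate of the mean squared estimation error under gain L."""
    total = 0.0
    for _ in range(n_traj):
        x = rng.normal(size=(2, 1))
        xhat = np.zeros((2, 1))
        for _ in range(horizon):
            y = C @ x + rng.multivariate_normal(np.zeros(1), V).reshape(1, 1)
            xhat = A @ xhat + L @ (y - C @ xhat)  # predictor update with gain L
            x = A @ x + rng.multivariate_normal(np.zeros(2), W).reshape(2, 1)
            total += float(np.sum((x - xhat) ** 2))
    return total / (n_traj * horizon)

def zeroth_order_grad(L, horizon, r=0.05, n_samples=8):
    """Two-point zeroth-order estimate of the gradient of the rollout cost."""
    g = np.zeros_like(L)
    for _ in range(n_samples):
        U = rng.normal(size=L.shape)
        U /= np.linalg.norm(U)
        g += (rollout_cost(L + r * U, horizon)
              - rollout_cost(L - r * U, horizon)) / (2 * r) * U
    return g / n_samples

L = np.zeros((2, 1))                  # no stabilizing initialization required
for h in range(1, 6):                 # grow the horizon, warm-starting L
    for _ in range(25):               # PG steps at the current horizon
        L -= 0.01 * zeroth_order_grad(L, horizon=h)
print("learned filter gain:\n", L)

The horizon loop mirrors the receding-horizon idea at a high level: each finite-horizon problem is solved approximately by policy gradient and its solution warm-starts the next, longer horizon; the paper's actual guarantees concern this style of scheme under its own assumptions.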
Abstract (translated)
We develop the first end-to-end sample complexity analysis of model-free policy gradient (PG) methods in discrete-time infinite-horizon Kalman filtering. Specifically, we introduce the receding-horizon policy gradient framework (RHPG-KF) and prove an $\tilde{\mathcal{O}}(\epsilon^{-2})$ sample complexity for learning a stabilizing filter that is $\epsilon$-close to the optimal Kalman filter. Notably, the proposed RHPG-KF framework does not require the system to be open-loop stable, nor does it assume prior knowledge of a stabilizing filter. Our results show that model-free PG methods can be used to control a linear dynamical system whose state measurements may be corrupted by statistical noise and other (possibly adversarial) disturbances.
URL
https://arxiv.org/abs/2301.12624