Abstract
We present a policy optimization framework in which the learned policy comes with a machine-checkable certificate of adversarial robustness. Our approach, called CAROL, learns a model of the environment. In each learning iteration, it uses the current version of this model and an external abstract interpreter to construct a differentiable signal for provable robustness. This signal is used to guide policy learning, and the abstract interpretation used to construct it directly leads to the robustness certificate returned at convergence. We give a theoretical analysis that bounds the worst-case cumulative reward of CAROL. We also experimentally evaluate CAROL on four MuJoCo environments. On these tasks, which involve continuous state and action spaces, CAROL learns certified policies that have performance comparable to the (non-certified) policies learned using state-of-the-art robust RL methods.
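To make the core idea concrete, below is a minimal sketch of how an abstract interpreter can yield a differentiable robustness signal for a neural policy. It uses interval bound propagation (a box-domain abstract interpretation) as the interpreter, which is one standard choice but is an assumption here, not necessarily the paper's exact construction; the class `IntervalPolicy`, the function `robustness_loss`, and the specific surrogate (the width of the reachable action box) are all hypothetical names introduced for illustration.

```python
# Sketch: a differentiable robustness signal from abstract interpretation.
# Assumes interval bound propagation (IBP); all names are illustrative,
# not taken from the CAROL paper.
import torch
import torch.nn as nn

class IntervalPolicy(nn.Module):
    """Two-layer MLP policy supporting box (interval) abstract interpretation."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.fc1 = nn.Linear(state_dim, hidden)
        self.fc2 = nn.Linear(hidden, action_dim)

    def forward(self, s):
        return self.fc2(torch.relu(self.fc1(s)))

    def interval_forward(self, lo, hi):
        # Propagate the box [lo, hi] through each affine layer. Splitting
        # the weight matrix into positive and negative parts keeps the
        # output box sound: it contains every concrete output.
        for layer in (self.fc1, self.fc2):
            w_pos = layer.weight.clamp(min=0.0)
            w_neg = layer.weight.clamp(max=0.0)
            new_lo = lo @ w_pos.t() + hi @ w_neg.t() + layer.bias
            new_hi = hi @ w_pos.t() + lo @ w_neg.t() + layer.bias
            lo, hi = new_lo, new_hi
            if layer is self.fc1:  # ReLU is monotone, so apply it endpoint-wise
                lo, hi = torch.relu(lo), torch.relu(hi)
        return lo, hi

def robustness_loss(policy, states, eps):
    """Differentiable surrogate: width of the reachable action box when each
    state is perturbed within an L-infinity ball of radius eps."""
    lo, hi = policy.interval_forward(states - eps, states + eps)
    return (hi - lo).mean()

# Usage: add the signal to the usual policy-optimization objective.
policy = IntervalPolicy(state_dim=11, action_dim=3)
states = torch.randn(32, 11)
loss = robustness_loss(policy, states, eps=0.05)
loss.backward()  # gradients flow through the interval bounds
```

Because the same interval propagation both shapes the training loss and soundly over-approximates the policy's behavior under perturbation, the bounds computed at convergence can serve directly as the machine-checkable certificate the abstract describes.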
URL
https://arxiv.org/abs/2301.11374