Many real-world decision-making tasks, such as safety-critical scenarios, cannot be fully described in a single-objective setting using the Markov Decision Process (MDP) framework, as they include hard constraints. These can instead be modeled with additional cost functions within the Constrained Markov Decision Process (CMDP) framework. Even though CMDPs have been extensively studied in the Reinforcement Learning literature, little attention has been given to sampling-based planning algorithms such as MCTS for solving them. Previous approaches use Monte Carlo cost estimates to avoid constraint violations. However, these suffer from high variance, which results in conservative performance with respect to costs. We propose Constrained MCTS (C-MCTS), an algorithm that estimates cost using a safety critic. The safety critic is trained with Temporal Difference learning in an offline phase, prior to agent deployment. During deployment, this critic limits the exploration of the search tree and removes unsafe trajectories within MCTS. C-MCTS satisfies cost constraints but operates closer to the constraint boundary, achieving higher rewards compared to previous work. As a nice byproduct, the planner is also more efficient, requiring fewer planning steps. Most importantly, we show that under model mismatch between the planner and the real world, our approach is less susceptible to cost violations than previous work.
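The mechanism sketched in the abstract — a safety critic trained offline with Temporal Difference learning, then used at deployment time to prune unsafe actions from the search — can be illustrated on a toy problem. The sketch below is a minimal illustration under assumed details (a tabular critic on a 1-D chain where states beyond a threshold incur cost); the names `SafetyCritic` and `safe_actions` are hypothetical, not from the paper, and a full C-MCTS planner would embed this pruning inside the tree search.

```python
import random

class SafetyCritic:
    """Tabular critic Q_c(s, a): estimated expected cumulative cost,
    trained offline with one-step Temporal Difference updates."""
    def __init__(self, alpha=0.1, gamma=0.99):
        self.q = {}  # (state, action) -> cost estimate
        self.alpha, self.gamma = alpha, gamma

    def estimate(self, s, a):
        return self.q.get((s, a), 0.0)

    def td_update(self, s, a, cost, s_next, all_actions):
        # TD target: observed cost + discounted best-case future cost.
        target = cost + self.gamma * min(
            self.estimate(s_next, an) for an in all_actions)
        self.q[(s, a)] = (self.estimate(s, a)
                          + self.alpha * (target - self.estimate(s, a)))

# Toy chain MDP (an assumption for illustration): actions move left/right,
# states >= 3 are unsafe and incur a cost of 1.
ACTIONS = (1, -1)

def cost_fn(s):
    return 1.0 if s >= 3 else 0.0

# Offline phase: train the critic with random rollouts on the model.
critic = SafetyCritic()
random.seed(0)
for _ in range(2000):
    s = 0
    for _ in range(10):
        a = random.choice(ACTIONS)
        s_next = max(0, s + a)
        critic.td_update(s, a, cost_fn(s_next), s_next, ACTIONS)
        s = s_next

def safe_actions(s, cost_budget):
    """Deployment-time pruning: drop actions whose critic cost estimate
    exceeds the remaining budget before they enter the search tree."""
    return [a for a in ACTIONS if critic.estimate(s, a) <= cost_budget]

print(safe_actions(2, 0.1))  # moving right from s=2 leads into the unsafe region
```

Because the critic's estimate is bootstrapped rather than a per-search Monte Carlo average, it has lower variance, which is what lets the planner act closer to the constraint boundary instead of over-conservatively.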