Abstract
To align mobile robot navigation policies with user preferences through reinforcement learning from human feedback (RLHF), reliable and behavior-diverse user queries are required. However, deterministic policies fail to generate a variety of navigation trajectory suggestions for a given navigation task configuration. We introduce EnQuery, a query generation approach using an ensemble of policies that achieves behavioral diversity through a regularization term. For a given navigation task, EnQuery produces multiple navigation trajectory suggestions, thereby improving the efficiency of preference data collection by requiring fewer queries. Our method demonstrates superior performance in aligning navigation policies with user preferences in low-query regimes, offering enhanced policy convergence from sparse preference queries. The evaluation is complemented by a novel explainability representation that captures the full scene navigation behavior of the mobile robot in a single plot.
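The core idea of the abstract — an ensemble of policies kept behaviorally distinct by a regularization term, so that one task configuration yields several trajectory suggestions to query the user with — can be sketched minimally. The class names, the toy linear policies, and the specific pairwise-distance loss below are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

class LinearPolicy:
    """Toy deterministic policy: action = W @ observation.
    (Stand-in for a trained navigation policy network.)"""
    def __init__(self, obs_dim, act_dim):
        self.W = rng.normal(size=(act_dim, obs_dim))

    def act(self, obs):
        return self.W @ obs

def diversity_regularizer(policies, obs_batch):
    """Mean pairwise L2 distance between the ensemble members' actions
    on a shared observation batch. Subtracting a scaled version of this
    from the task loss pushes members toward distinct behaviors
    (assumed loss form, for illustration only)."""
    actions = [np.stack([p.act(o) for o in obs_batch]) for p in policies]
    total, pairs = 0.0, 0
    for i in range(len(actions)):
        for j in range(i + 1, len(actions)):
            total += np.mean(np.linalg.norm(actions[i] - actions[j], axis=1))
            pairs += 1
    return total / pairs

# An ensemble of three policies evaluated on the same task configuration:
# each member would propose its own trajectory suggestion for the user query.
ensemble = [LinearPolicy(obs_dim=4, act_dim=2) for _ in range(3)]
obs_batch = rng.normal(size=(8, 4))
print(diversity_regularizer(ensemble, obs_batch))  # > 0 for distinct members
```

Identical ensemble members drive the regularizer to zero, while distinct members yield a positive value, which is the signal a diversity-promoting training objective would exploit.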
URL
https://arxiv.org/abs/2404.04852