Abstract
In autonomous robot exploration tasks, a mobile robot needs to actively explore and map an unknown environment as fast as possible. Since the environment is only revealed during exploration, the robot must frequently re-plan its path online as new information is acquired by onboard sensors and used to update its partial map. While state-of-the-art exploration planners are frontier- and sampling-based, encouraged by recent developments in deep reinforcement learning (DRL), we propose ARiADNE, an attention-based neural approach to real-time, non-myopic path planning for autonomous exploration. ARiADNE is able to learn dependencies at multiple spatial scales between areas of the agent's partial map, and to implicitly predict the potential gains associated with exploring those areas. This allows the agent to sequence movement actions that balance the natural trade-off between exploitation/refinement of the map in known areas and exploration of new areas. We experimentally demonstrate that our method outperforms both learning and non-learning state-of-the-art baselines in terms of average trajectory length to complete exploration in hundreds of simplified 2D indoor scenarios. We further validate our approach in high-fidelity Robot Operating System (ROS) simulations, where we consider a real sensor model and a realistic low-level motion controller, toward deployment on real robots.
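The abstract describes an attention mechanism that learns dependencies between areas of the partial map and scores their potential exploration gain. The paper itself provides no code here; the following is a purely illustrative sketch of that idea, not ARiADNE's actual architecture. All names, feature shapes, and the single-head formulation are assumptions: map areas are represented as a set of feature vectors, self-attention mixes information across them, and a linear head scores each area.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_gain(node_feats, Wq, Wk, Wv, w_out):
    """Illustrative single-head self-attention over map-area features,
    followed by a linear head scoring each area's predicted gain.
    (Hypothetical sketch; not the paper's network.)"""
    Q, K, V = node_feats @ Wq, node_feats @ Wk, node_feats @ Wv
    d = Q.shape[-1]
    attn = softmax(Q @ K.T / np.sqrt(d))  # (N, N) dependencies between areas
    ctx = attn @ V                        # context-aware area features
    return ctx @ w_out                    # (N,) predicted gain per area

rng = np.random.default_rng(0)
N, F, d = 6, 8, 16  # 6 candidate areas, 8 raw features each (assumed)
feats = rng.normal(size=(N, F))
Wq, Wk, Wv = (rng.normal(size=(F, d)) for _ in range(3))
gains = attention_gain(feats, Wq, Wk, Wv, rng.normal(size=d))
best = int(np.argmax(gains))  # greedy pick of next area to visit
```

In the paper's setting such per-area scores would feed an RL policy rather than a greedy argmax, letting the agent trade off refining known areas against reaching new ones.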
URL
https://arxiv.org/abs/2301.11575