Abstract
In this paper, we present a framework for real-time autonomous robot navigation based on cloud and on-demand databases, addressing two major issues: human-like robot interaction and task planning in a global dynamic environment that is not known a priori. Our framework builds a human-brain-like GPS mapping system for the robot using spatial information and performs 3D visual semantic SLAM for independent robot navigation. We achieve this by separating the robot's memory system into Long-Term Memory (LTM) and Short-Term Memory (STM). We also form the robot's behavior and knowledge system by linking these memories to an Autonomous Navigation Module (ANM), a Learning Module (LM), and a Behavior Planner Module (BPM). The proposed framework is assessed in simulation using a ROS-based, Gazebo-simulated mobile robot equipped with an RGB-D camera (3D sensor) and a laser range finder (2D sensor) in a 3D model of a realistic indoor environment. The simulation corroborates the substantial practical merit of our proposed framework.
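As a rough illustration of the memory architecture described above, the following is a minimal sketch, not the authors' implementation: it assumes hypothetical ShortTermMemory, LongTermMemory, and BehaviorPlanner classes to show one way transient observations could be consolidated from STM into a persistent LTM semantic map that the planner (BPM) then queries. All class, method, and field names are illustrative assumptions.

```python
# Hypothetical sketch of the LTM/STM split linked to a behavior planner.
# None of these names come from the paper; they only illustrate the flow.
from dataclasses import dataclass, field

@dataclass
class ShortTermMemory:
    """Holds transient observations from the current navigation episode."""
    observations: list = field(default_factory=list)

    def store(self, observation):
        self.observations.append(observation)

@dataclass
class LongTermMemory:
    """Persistent semantic-map entries (e.g. backed by a cloud database)."""
    semantic_map: dict = field(default_factory=dict)

    def consolidate(self, stm: ShortTermMemory):
        # Promote recent STM observations into the persistent semantic map,
        # then clear the short-term buffer.
        for obs in stm.observations:
            self.semantic_map[obs["landmark"]] = obs["pose"]
        stm.observations.clear()

class BehaviorPlanner:
    """BPM stand-in: plans a goal by querying LTM for a known landmark."""
    def __init__(self, ltm: LongTermMemory):
        self.ltm = ltm

    def plan_to(self, landmark):
        pose = self.ltm.semantic_map.get(landmark)
        return {"goal": landmark, "pose": pose} if pose is not None else None

# Usage: an observation flows STM -> LTM -> planner query.
stm, ltm = ShortTermMemory(), LongTermMemory()
stm.store({"landmark": "kitchen_door", "pose": (3.0, 1.5, 0.0)})
ltm.consolidate(stm)
planner = BehaviorPlanner(ltm)
print(planner.plan_to("kitchen_door"))
# → {'goal': 'kitchen_door', 'pose': (3.0, 1.5, 0.0)}
```

The design point mirrored here is that planning reads only from long-term memory, so short-term observations influence behavior only after an explicit consolidation step.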
URL
https://arxiv.org/abs/1905.12942