Abstract
Autonomous agile flight brings up fundamental challenges in robotics, such as coping with unreliable state estimation, reacting optimally to dynamically changing environments, and coupling perception and action in real time under severe resource constraints. In this paper, we consider these challenges in the context of autonomous, vision-based drone racing in dynamic environments. Our approach combines a convolutional neural network (CNN) with a state-of-the-art path-planning and control system. The CNN directly maps raw images into a robust representation in the form of a waypoint and desired speed. This information is then used by the planner to generate a short, minimum-jerk trajectory segment and corresponding motor commands to reach the desired goal. We demonstrate our method in autonomous agile flight scenarios, in which a vision-based quadrotor traverses drone-racing tracks with possibly moving gates. Our method does not require any explicit map of the environment and runs fully onboard. We extensively test the precision and robustness of the approach in simulation and in the physical world. We also evaluate our method against state-of-the-art navigation approaches and professional human drone pilots.
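The pipeline described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration only: the function names, the rest-to-rest boundary conditions, and the way segment duration is derived from the desired speed are assumptions for the sketch, not the paper's actual implementation, and the CNN itself is not reproduced.

```python
def min_jerk_position(x0, xf, T, t):
    """Minimum-jerk 1D position at time t for a rest-to-rest segment of
    duration T (zero velocity and acceleration at both endpoints)."""
    tau = max(0.0, min(1.0, t / T))
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5  # minimum-jerk time scaling
    return x0 + (xf - x0) * s

def plan_segment(current_pos, waypoint, desired_speed):
    """Plan a short minimum-jerk segment toward the waypoint predicted by
    the perception network, with duration set by the desired speed.
    Returns one position function per axis and the segment duration."""
    dist = sum((w - c) ** 2 for c, w in zip(current_pos, waypoint)) ** 0.5
    T = max(dist / max(desired_speed, 1e-6), 1e-3)  # avoid division by zero
    # Bind c, w as defaults so each lambda captures its own axis values.
    axes = [lambda t, c=c, w=w: min_jerk_position(c, w, T, t)
            for c, w in zip(current_pos, waypoint)]
    return axes, T
```

For example, `plan_segment((0.0, 0.0, 1.0), (2.0, 0.0, 1.5), 1.0)` returns a 2-second segment whose per-axis positions start at the current position and end at the waypoint. In the paper's system such a segment is replanned continuously from fresh CNN predictions, which is what lets the quadrotor track gates that move.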
URL
https://arxiv.org/abs/1806.08548