Abstract
Task completion and collision avoidance are the major components of any successful autonomous flight system. Most deep learning algorithms perform well in the environments and conditions under which they were trained, but they fail when subjected to novel environments. In this paper we present autonomous UAV flight using Deep Reinforcement Learning augmented with self-attention models, which can reason effectively over varying inputs. Beyond this reasoning ability, self-attention models are also interpretable, which enables their use under real-world conditions. We tested our algorithm across different weather conditions and environments and found it more robust than conventional Deep Reinforcement Learning algorithms.
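The abstract does not specify the architecture, but a Deep Reinforcement Learning policy "augmented with self-attention" is commonly realized by inserting an attention layer between the observation encoder and the policy head. The sketch below is a minimal, hypothetical illustration of that pattern in PyTorch; all dimensions, layer choices, and class names are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn


class SelfAttentionEncoder(nn.Module):
    """Single-head self-attention over a sequence of patch features.

    Hypothetical sketch: attends over patch embeddings of an observation,
    letting the policy weight input regions by relevance (this weighting
    is also what makes such models inspectable/interpretable).
    """

    def __init__(self, dim: int = 32):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)
        self.scale = dim ** -0.5  # scaled dot-product attention

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_patches, dim)
        q, k, v = self.query(x), self.key(x), self.value(x)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return attn @ v  # attended features, same shape as x


class AttentionPolicy(nn.Module):
    """Maps attended observation features to action logits for the UAV."""

    def __init__(self, dim: int = 32, num_actions: int = 4):
        super().__init__()
        self.encoder = SelfAttentionEncoder(dim)
        self.head = nn.Linear(dim, num_actions)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(x).mean(dim=1)  # pool over patches
        return self.head(feats)


# Example: one observation split into 16 patch embeddings of dimension 32.
obs = torch.randn(1, 16, 32)
policy = AttentionPolicy()
logits = policy(obs)
print(logits.shape)  # torch.Size([1, 4])
```

In a full RL setup these logits would parameterize the action distribution of an algorithm such as PPO or DQN; the attention weights (`attn`) can be visualized to see which input regions the policy attends to.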
URL
https://arxiv.org/abs/2105.12254