Abstract
Micro-drones can be integrated into various industrial applications but are constrained by their onboard computing power and, as a secondary challenge, the need for expert pilots. This study presents a computationally efficient deep convolutional neural network that utilizes Gabor filters and spatially separable convolutions, both of which have low computational complexity. An attention module is integrated with the model to complement its performance. Further, a perception-based action space and trajectory generators are integrated with the model's predictions for intuitive navigation. The computationally efficient model aids a human operator in controlling a micro-drone via gestures. Profiling with the NVIDIA GPU profiler during training shows that nearly 18% of computational resources are conserved. In experimental verification using a low-cost DJI Tello drone, the computationally efficient model shows promising results compared to a state-of-the-art model and a conventional computer vision-based technique.
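The abstract names two building blocks: Gabor filters and spatially separable convolutions. As a minimal sketch (not the authors' implementation; all function names and parameter values below are illustrative assumptions), the snippet shows a standard 2D Gabor kernel and why a rank-1 k×k kernel factored into a k×1 pass followed by a 1×k pass gives the same output as the full 2D convolution while cutting multiply-adds per output pixel from k² to 2k.

```python
import numpy as np

def gabor_kernel(size=7, sigma=2.0, theta=0.0, lam=4.0, psi=0.0, gamma=0.5):
    """2D Gabor filter: a Gaussian envelope modulating a sinusoidal carrier.
    Parameter values here are illustrative, not taken from the paper."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)     # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr) ** 2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lam + psi)
    return envelope * carrier

def conv2d_valid(img, k):
    """Naive 'valid' 2D cross-correlation, for demonstration only."""
    kh, kw = k.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

# A spatially separable convolution factors a rank-1 3x3 kernel into
# a 3x1 vertical pass and a 1x3 horizontal pass (Sobel-like example).
col = np.array([[1.0], [2.0], [1.0]])   # 3x1 column filter
row = np.array([[1.0, 0.0, -1.0]])      # 1x3 row filter
full = col @ row                        # equivalent full 3x3 kernel

rng = np.random.default_rng(0)
img = rng.random((8, 8))
sep = conv2d_valid(conv2d_valid(img, col), row)  # 2*k multiplies per pixel
direct = conv2d_valid(img, full)                 # k*k multiplies per pixel
assert np.allclose(sep, direct)
```

Note that only kernels of rank 1 factor exactly this way; in a network, the two 1D convolutions are instead learned directly, which is where the computational savings the abstract refers to come from.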
URL
https://arxiv.org/abs/2301.12470