Abstract
As technological advancements continue to expand the capabilities of multi-unmanned-aerial-vehicle (mUAV) systems, human operators face scalability and efficiency challenges due to the complex cognitive load and operations associated with motion adjustments and team coordination. These cognitive demands limit the feasible size of mUAV teams and necessitate extensive operator training, impeding broader adoption. This paper develops Hand Gesture Based Interactive Control (HGIC), a novel interface system that uses computer vision techniques to intuitively translate hand gestures into modular commands for robot teaming. Through learned control models, these commands enable efficient and scalable mUAV motion control and adjustment. HGIC eliminates the need for specialized hardware and offers two key benefits: 1) minimal training requirements through natural gestures; and 2) enhanced scalability and efficiency via adaptable commands. By reducing the cognitive burden on operators, HGIC opens the door to more effective large-scale mUAV applications in complex, dynamic, and uncertain scenarios. HGIC will be open-sourced after the paper is published online, aiming to drive forward innovations in human-mUAV interaction.
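The abstract describes translating recognized hand gestures into modular team commands. As a minimal illustrative sketch (the gesture labels, command names, and dispatch logic below are hypothetical assumptions, not the paper's actual implementation), such a gesture-to-command layer might look like:

```python
# Hypothetical sketch of a gesture-to-command layer in the spirit of HGIC.
# All gesture labels and command names here are illustrative assumptions;
# a real system would obtain gesture labels from a vision model.

from dataclasses import dataclass


@dataclass
class TeamCommand:
    action: str       # e.g. "hover", "land", "ascend" (assumed names)
    magnitude: float  # scale of the motion adjustment


# Illustrative mapping table from recognized gesture labels to commands.
GESTURE_COMMANDS = {
    "open_palm":  TeamCommand("hover", 0.0),
    "fist":       TeamCommand("land", 0.0),
    "point_up":   TeamCommand("ascend", 1.0),
    "swipe_left": TeamCommand("translate_left", 1.0),
}


def dispatch(gesture_label: str) -> TeamCommand:
    """Translate a recognized gesture label into a modular team command.

    Unrecognized gestures fall back to a safe hover command.
    """
    return GESTURE_COMMANDS.get(gesture_label, TeamCommand("hover", 0.0))
```

Because the mapping is table-driven, commands can be added or remapped without touching the control stack, which is one plausible way the "adaptable commands" benefit could be realized.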
URL
https://arxiv.org/abs/2403.05478