Abstract
Augmented reality (AR) offers immersive interaction but remains inaccessible to users with motor impairments or limited dexterity because it relies on precise input methods. This study proposes a gesture-based interaction system for AR environments that uses deep learning to recognize hand and body gestures from wearable sensors and cameras, and adapts the interface to each user's capabilities. The system employs vision transformers (ViTs), temporal convolutional networks (TCNs), and graph attention networks (GATs) for gesture processing, with federated learning enabling privacy-preserving model training across diverse users. Reinforcement learning optimizes interface elements such as menu layouts and interaction modes. In experiments, motor-impaired users completed tasks 20% more efficiently and reported 25% higher satisfaction than with baseline AR systems. This approach improves both the accessibility and the scalability of AR.
Keywords: Deep learning, Federated learning, Gesture recognition, Augmented reality, Accessibility, Human-computer interaction
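The paper itself provides no code; the following is a minimal PyTorch sketch of how the described pipeline might be composed: a ViT-style encoder produces per-frame features, a TCN aggregates them over time into gesture logits, and a FedAvg step averages client weights so raw gesture data never leaves the device. All module names, dimensions, and the 10-class gesture set are illustrative assumptions; the GAT branch over skeleton joints and the RL interface adapter are omitted for brevity.

```python
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """ViT-style encoder: splits each frame into patches, applies self-attention."""
    def __init__(self, img=96, patch=16, dim=128, heads=4, layers=2):
        super().__init__()
        self.patchify = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, (img // patch) ** 2, dim))
        block = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(block, num_layers=layers)

    def forward(self, x):                                      # x: (B, 3, H, W)
        tokens = self.patchify(x).flatten(2).transpose(1, 2)   # (B, N_patches, dim)
        return self.encoder(tokens + self.pos).mean(dim=1)     # (B, dim)

class TemporalConvNet(nn.Module):
    """TCN head: dilated 1-D convolutions over the per-frame feature sequence."""
    def __init__(self, dim=128, n_classes=10):
        super().__init__()
        self.tcn = nn.Sequential(
            nn.Conv1d(dim, dim, 3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv1d(dim, dim, 3, padding=2, dilation=2), nn.ReLU(),
        )
        self.head = nn.Linear(dim, n_classes)

    def forward(self, feats):                                  # feats: (B, T, dim)
        h = self.tcn(feats.transpose(1, 2)).mean(dim=2)        # (B, dim)
        return self.head(h)                                    # (B, n_classes)

class GestureNet(nn.Module):
    """Per-frame ViT features -> TCN over time -> gesture logits."""
    def __init__(self):
        super().__init__()
        self.frame, self.temporal = FrameEncoder(), TemporalConvNet()

    def forward(self, clip):                                   # clip: (B, T, 3, H, W)
        B, T = clip.shape[:2]
        feats = self.frame(clip.flatten(0, 1)).view(B, T, -1)
        return self.temporal(feats)

@torch.no_grad()
def fed_avg(client_states):
    """FedAvg: server averages client weight updates, never sees raw sensor data."""
    return {k: torch.stack([s[k].float() for s in client_states]).mean(0)
            for k in client_states[0]}

logits = GestureNet()(torch.randn(2, 16, 3, 96, 96))  # 2 clips of 16 frames
print(logits.shape)                                   # torch.Size([2, 10])
```

In a setup like this, each AR headset would train `GestureNet` locally on its wearer's gestures and upload only the resulting `state_dict` for averaging, which is what makes the training privacy-preserving across diverse users.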
URL
https://arxiv.org/abs/2506.15189