Abstract
Exoskeletons for daily use by those with mobility impairments are being developed, and they will require accurate and robust scene understanding systems. Current research has used vision to identify immediate terrain and geometric obstacles; however, these approaches are constrained to detections directly in front of the user and are limited to classifying a finite range of terrain types (e.g., stairs, ramps, and level ground). This paper presents Exosense, a vision-centric scene understanding system capable of generating rich, globally consistent elevation maps that incorporate both semantic and terrain-traversability information. It features an elastic Atlas mapping framework associated with a visual SLAM pose graph, embedded with open-vocabulary room labels from a Vision-Language Model (VLM). The device's design includes a wide field-of-view (FoV) fisheye multi-camera system to mitigate the challenges introduced by the exoskeleton walking pattern. We demonstrate the system's robustness to the challenges of typical periodic walking gaits, and its ability to construct accurate, semantically rich maps in indoor settings. Additionally, we showcase its potential for motion planning -- providing a step towards safe navigation for exoskeletons.
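To make the abstract's core data structure concrete, the following is a minimal, hypothetical sketch of an elevation map whose cells carry semantic and traversability information, grouped into submaps tagged with an open-vocabulary room label (Atlas-style). All names and the structure are illustrative assumptions, not the paper's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Cell:
    elevation: float       # cell height in metres
    semantic_label: str    # e.g. "floor", "stairs" (could be open-vocabulary)
    traversability: float  # 0.0 (blocked) .. 1.0 (freely traversable)

@dataclass
class Submap:
    room_label: str    # open-vocabulary label, e.g. produced by a VLM
    resolution: float  # metres per grid cell
    cells: dict = field(default_factory=dict)  # (ix, iy) -> Cell

    def _index(self, x, y):
        # Discretise metric coordinates into grid indices.
        return int(x / self.resolution), int(y / self.resolution)

    def set_cell(self, x, y, cell):
        self.cells[self._index(x, y)] = cell

    def is_traversable(self, x, y, threshold=0.5):
        # Unknown cells are treated as non-traversable.
        c = self.cells.get(self._index(x, y))
        return c is not None and c.traversability >= threshold

# Illustrative usage: a small submap with one flat cell and one step.
kitchen = Submap(room_label="kitchen", resolution=0.05)
kitchen.set_cell(0.10, 0.10, Cell(0.00, "floor", 1.0))
kitchen.set_cell(0.30, 0.10, Cell(0.18, "step", 0.2))
print(kitchen.is_traversable(0.10, 0.10))  # True
print(kitchen.is_traversable(0.30, 0.10))  # False
```

A motion planner could query `is_traversable` over candidate footholds; in a full system the per-cell traversability would be derived from elevation gradients and semantics rather than set by hand.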
URL
https://arxiv.org/abs/2403.14320