Abstract
Terrain-aware perception holds the potential to improve the robustness and accuracy of autonomous robot navigation in the wild, thereby facilitating effective off-road traversal. However, the lack of multi-modal perception across various motion patterns hinders solutions to Simultaneous Localization And Mapping (SLAM), especially when confronting non-geometric hazards in demanding landscapes. In this paper, we first propose a Terrain-Aware multI-modaL (TAIL) dataset tailored to deformable and sandy terrains. It incorporates various types of robotic proprioception and distinct ground interactions, providing both the unique challenges and a benchmark for multi-sensor fusion SLAM. The versatile sensor suite comprises stereo frame cameras, multiple ground-pointing RGB-D cameras, a rotating 3D LiDAR, an IMU, and an RTK device. This ensemble is hardware-synchronized, well-calibrated, and self-contained. Using both wheeled and quadrupedal locomotion, we efficiently collect comprehensive sequences that capture rich unstructured scenarios. The dataset spans a spectrum of scope, terrain interactions, scene changes, ground-level properties, and dynamic robot characteristics. We benchmark several state-of-the-art SLAM methods against ground truth and provide performance validations. Corresponding challenges and limitations are also reported. All associated resources are accessible upon request at \url{this https URL}.
URL
https://arxiv.org/abs/2403.16875