Abstract
To ensure efficient robot autonomy under diverse real-world conditions, a high-quality heterogeneous dataset is essential for benchmarking the performance and robustness of the algorithms on which robots operate. Current benchmarks predominantly focus on urban terrains, specifically for on-road autonomous driving, leaving multi-degraded, densely vegetated, dynamic, and feature-sparse environments, such as underground tunnels, natural fields, and modern indoor spaces, underrepresented. To fill this gap, we introduce EnvoDat, a large-scale, multi-modal dataset collected in diverse environments and conditions, including high illumination, fog, rain, and zero visibility, at different times of the day. Overall, EnvoDat contains 26 sequences from 13 scenes, 10 sensing modalities, over 1.9 TB of data, and over 89K fine-grained polygon-based annotations covering more than 82 object and terrain classes. We post-processed EnvoDat into different formats that support benchmarking SLAM and supervised learning algorithms, as well as fine-tuning multimodal vision models. With EnvoDat, we contribute to environment-resilient robotic autonomy in areas where conditions are extremely challenging. The datasets and other relevant resources can be accessed through this https URL.
URL
https://arxiv.org/abs/2410.22200