Abstract
In this article we present a novel underwater dataset collected from several field trials within the EU FP7 project "Cognitive autonomous diving buddy (CADDY)", in which an Autonomous Underwater Vehicle (AUV) was used to interact with divers and monitor their activities. To our knowledge, this is one of the first efforts to collect a large dataset in underwater environments targeting object classification, segmentation and human pose estimation tasks. The first part of the dataset contains stereo camera recordings (~10K) of divers performing hand gestures to communicate and interact with an AUV under different environmental conditions. These gesture samples serve to test the robustness of object detection and classification algorithms against underwater image distortions, i.e., color attenuation and light backscatter. The second part includes stereo footage (~12.7K) of divers free-swimming in front of the AUV, along with synchronized measurements from IMUs located throughout the diver's suit (DiverNet), which serve as ground truth for human pose estimation and tracking methods. In both cases, the rectified images allow investigation of 3D representation and reasoning pipelines on the low-texture targets commonly present in underwater scenarios. In this paper we describe our recording platform and sensor calibration procedure, as well as the data format and the utilities provided for using the dataset.
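Since the dataset consists of synchronized left/right stereo recordings, a typical first step for users is matching the two views frame by frame before any 3D processing. The sketch below shows one way to pair stereo frames by a shared frame index; the `left_NNNNNN.jpg` / `right_NNNNNN.jpg` naming scheme is an assumption for illustration, not the actual CADDY file layout.

```python
# Hypothetical sketch: pair left/right views of a stereo recording by
# frame index. The filename convention here is assumed, not taken from
# the CADDY dataset documentation.
import re

def pair_stereo_frames(filenames):
    """Group names like 'left_000123.jpg' / 'right_000123.jpg'
    into (left, right) tuples keyed by their frame index."""
    pattern = re.compile(r"(left|right)_(\d+)\.jpg$")
    frames = {}
    for name in filenames:
        m = pattern.search(name)
        if not m:
            continue  # skip files that do not match the convention
        side, idx = m.group(1), int(m.group(2))
        frames.setdefault(idx, {})[side] = name
    # keep only frame indices where both views are present
    return [(f["left"], f["right"])
            for idx, f in sorted(frames.items())
            if "left" in f and "right" in f]

files = ["left_000001.jpg", "right_000001.jpg",
         "left_000002.jpg",                      # right view missing -> dropped
         "right_000003.jpg", "left_000003.jpg"]  # order does not matter
print(pair_stereo_frames(files))
# → [('left_000001.jpg', 'right_000001.jpg'), ('left_000003.jpg', 'right_000003.jpg')]
```

Dropping frames with a missing counterpart keeps downstream stereo-matching code simple, at the cost of discarding a few monocular frames.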
URL
https://arxiv.org/abs/1807.04856