Abstract
A major obstacle to the development of effective monocular depth estimation algorithms is the difficulty of obtaining high-quality depth data aligned with collected RGB images. Collecting this data is time-consuming and costly, and even data collected by modern sensors has limited range or resolution and is subject to inconsistencies and noise. To address this, we propose a method of data generation in simulation using 3D synthetic environments and CycleGAN domain transfer. We compare data generated this way against the popular NYUDepth V2 dataset by training depth estimation models based on the DenseDepth architecture on different training sets of real and simulated data. We evaluate the models on newly collected images and LiDAR depth data from a Husky robot to verify the generalizability of the approach, and show that GAN-transformed data can serve as an effective alternative to real-world data, particularly for depth estimation.
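The abstract does not state which error measures are used when comparing predictions against the Husky LiDAR ground truth, but monocular depth estimation work (including DenseDepth) is conventionally evaluated with absolute relative error, RMSE, and the δ < 1.25 threshold accuracy. The sketch below is an illustration of those standard metrics, not the paper's own evaluation code; the sparse-LiDAR masking convention (treating zero depth as "no return") is an assumption.

```python
import numpy as np

def depth_metrics(pred, gt):
    """Common monocular-depth metrics: absolute relative error,
    RMSE, and the delta < 1.25 accuracy threshold."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    # LiDAR ground truth is typically sparse; assume zero means no return
    valid = gt > 0
    pred, gt = pred[valid], gt[valid]
    abs_rel = np.mean(np.abs(pred - gt) / gt)
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    # fraction of pixels whose prediction is within a factor 1.25 of truth
    delta1 = np.mean(np.maximum(pred / gt, gt / pred) < 1.25)
    return abs_rel, rmse, delta1

# Toy example: three valid LiDAR points and one empty return
gt = np.array([1.0, 2.0, 4.0, 0.0])
pred = np.array([1.1, 1.8, 4.4, 3.0])
abs_rel, rmse, delta1 = depth_metrics(pred, gt)
```

On the toy arrays above, each valid prediction is off by 10% of the true depth, so abs_rel is 0.1 and every pixel passes the δ < 1.25 threshold.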
URL
https://arxiv.org/abs/2405.01113