Abstract
Visual inspection of underwater structures by vehicles such as remotely operated vehicles (ROVs) plays an important role in the scientific, military, and commercial sectors. However, the automatic extraction of information by software tools is hindered by the optical characteristics of water, which degrade the quality of captured video. As a contribution toward restoring the color of underwater images, the Underwater Denoising Autoencoder (UDAE) model is developed using a denoising autoencoder with a U-Net architecture. The proposed network balances accuracy against computational cost, enabling real-time deployment on underwater visual tasks with an end-to-end autoencoder network. The perception of underwater vehicles is improved by reconstructing captured frames, yielding better performance in underwater tasks. Related learning methods use generative adversarial networks (GANs) to generate color-corrected underwater images; to our knowledge, this paper is the first to show that a single autoencoder can produce the same or better results. Moreover, image pairs are constructed for training the proposed network, since such a dataset is hard to obtain from underwater scenery. Finally, the proposed model is compared to a state-of-the-art method.
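The U-Net-style denoising autoencoder described above pairs an encoder that progressively compresses the degraded frame with a decoder that reconstructs it, passing skip connections between stages of matching resolution. A minimal, dependency-free sketch of that wiring follows; all layer sizes are illustrative assumptions (the paper's actual UDAE layer configuration is not given here), and each "conv" is a stand-in channel-mixing step rather than a trained convolution.

```python
import numpy as np

def conv(x, out_ch, rng):
    """Stand-in for a learned conv layer: a 1x1 channel mix (illustrative only)."""
    weight = rng.standard_normal((x.shape[-1], out_ch)) * 0.1
    return x @ weight  # (h, w, in_ch) @ (in_ch, out_ch) -> (h, w, out_ch)

def down(x):
    """2x2 average pooling: halves spatial resolution in the encoder."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def up(x):
    """2x nearest-neighbour upsampling: doubles resolution in the decoder."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def udae_sketch(noisy, rng):
    # Encoder: compress the degraded frame, keeping skip tensors.
    e1 = conv(noisy, 16, rng)          # (64, 64, 16)
    e2 = conv(down(e1), 32, rng)       # (32, 32, 32)
    b = conv(down(e2), 64, rng)        # (16, 16, 64) bottleneck
    # Decoder: upsample and concatenate the skips (the "U" shape).
    d2 = conv(np.concatenate([up(b), e2], axis=-1), 32, rng)   # (32, 32, 32)
    d1 = conv(np.concatenate([up(d2), e1], axis=-1), 16, rng)  # (64, 64, 16)
    return conv(d1, 3, rng)            # restored RGB frame, (64, 64, 3)

rng = np.random.default_rng(0)
noisy_frame = rng.random((64, 64, 3))  # stand-in for a degraded underwater frame
restored = udae_sketch(noisy_frame, rng)
print(restored.shape)  # (64, 64, 3)
```

In training, such a network would be fed the constructed degraded/clean image pairs the abstract mentions and optimized end-to-end so the output matches the color-correct target; the skip connections let fine spatial detail bypass the bottleneck.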
URL
https://arxiv.org/abs/1905.09000