Abstract
Raw underwater images are degraded by wavelength-dependent light attenuation and scattering, which limits their usefulness in vision systems. Enhancement is further complicated by the diversity of the water types in which images are captured: images taken in deep oceanic waters, for example, follow a different distribution from those taken in shallow coastal waters. This diversity makes it difficult to train a single model to enhance underwater images. In this work, we propose a novel model that handles this diversity during enhancement by adversarially learning content features that disentangle the unwanted nuisances corresponding to water types (viewed as different domains). We then use the learned domain-agnostic features to generate enhanced underwater images. We train our model on a dataset consisting of images of 10 Jerlov water types. Experimental results show that the proposed model not only outperforms previous methods in SSIM and PSNR scores for almost all Jerlov water types but also generalizes well to real-world datasets. Performance on a high-level vision task (object detection) also improves when using images enhanced by our model.
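The abstract reports results in SSIM and PSNR. As a minimal sketch of how such scores are typically computed (not the paper's evaluation code), the snippet below implements PSNR and a simplified global SSIM in NumPy; the image arrays and noise level are hypothetical stand-ins for a reference image and an enhanced output.

```python
import numpy as np

def psnr(reference, image, data_range=1.0):
    """Peak signal-to-noise ratio in dB between two images in [0, data_range]."""
    mse = np.mean((reference - image) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(reference, image, data_range=1.0):
    """Simplified SSIM computed over the whole image (standard SSIM uses
    local sliding windows; this global variant is for illustration only)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = reference.mean(), image.mean()
    var_x, var_y = reference.var(), image.var()
    cov_xy = np.mean((reference - mu_x) * (image - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

# Hypothetical data: a random "ground-truth" image and a noisy "enhanced" one.
rng = np.random.default_rng(0)
reference = rng.random((64, 64, 3))
enhanced = np.clip(reference + rng.normal(0.0, 0.05, reference.shape), 0.0, 1.0)

print(f"PSNR: {psnr(reference, enhanced):.2f} dB")
print(f"SSIM (global): {global_ssim(reference, enhanced):.4f}")
```

Higher PSNR and an SSIM closer to 1 indicate the enhanced image is closer to the reference, which is how the paper compares methods across Jerlov water types.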
URL
https://arxiv.org/abs/1905.13342