Abstract
Most deraining works focus on rain streak removal, but they cannot adequately handle heavy-rain images. In heavy rain, streaks are strongly visible, dense rain accumulation (the rain veiling effect) significantly washes out the image, and distant scenes appear considerably blurrier. In this paper, we propose a novel method to address these problems. We put forth a two-stage network: a physics-based backbone followed by a depth-guided GAN refinement. The first stage estimates the rain streaks, the transmission, and the atmospheric light governed by the underlying physics. To tease out these components more reliably, a guided-filtering framework is used to decompose the image into its low- and high-frequency components. This filtering is guided by a rain-free residue image: its content is used to set the passbands for the two channels in a spatially variant manner, so that background details do not get mixed up with the rain streaks. In the second stage, the refinement stage, we put forth a depth-guided GAN to recover the background details that the first stage fails to retrieve, and to correct artefacts it introduces. We have evaluated our method against state-of-the-art methods. Extensive experiments show that our method outperforms them on real rain images, recovering visually clean images with good details.
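The low/high-frequency decomposition mentioned above can be illustrated with a standard guided filter (He et al.): the smoothed output serves as the low-frequency layer and the residual as the high-frequency layer. This is only a minimal sketch of the generic technique, not the paper's method; the spatially variant passbands set by the rain-free residue image are not reproduced here, and the guide image, radius `r`, and regularizer `eps` are assumed illustrative values.

```python
import numpy as np

def box_filter(x, r):
    """Window sums over (2r+1)x(2r+1) neighborhoods via cumulative sums.

    Windows are truncated at the image border. Assumes both image
    dimensions are at least 2r + 2.
    """
    h, w = x.shape
    out = np.zeros_like(x, dtype=np.float64)
    c = np.cumsum(x, axis=0)
    out[:r + 1] = c[r:2 * r + 1]
    out[r + 1:h - r] = c[2 * r + 1:] - c[:h - 2 * r - 1]
    out[h - r:] = c[-1:] - c[h - 2 * r - 1:h - r - 1]
    c = np.cumsum(out, axis=1)
    out2 = np.empty_like(out)
    out2[:, :r + 1] = c[:, r:2 * r + 1]
    out2[:, r + 1:w - r] = c[:, 2 * r + 1:] - c[:, :w - 2 * r - 1]
    out2[:, w - r:] = c[:, -1:] - c[:, w - 2 * r - 1:w - r - 1]
    return out2

def guided_filter(guide, img, r, eps):
    """Edge-preserving smoothing of `img` steered by `guide` (grayscale)."""
    ones = np.ones_like(img, dtype=np.float64)
    n = box_filter(ones, r)                       # per-pixel window size
    m_g = box_filter(guide, r) / n                # local means
    m_i = box_filter(img, r) / n
    cov = box_filter(guide * img, r) / n - m_g * m_i
    var = box_filter(guide * guide, r) / n - m_g * m_g
    a = cov / (var + eps)                         # local linear coefficients
    b = m_i - a * m_g
    m_a = box_filter(a, r) / n
    m_b = box_filter(b, r) / n
    return m_a * guide + m_b

# Decomposition: the filtered result is the low-frequency layer;
# what remains is the high-frequency layer (streaks + fine detail).
img = np.random.default_rng(0).random((64, 64))
low = guided_filter(img, img, r=4, eps=1e-3)
high = img - low
```

In the paper's setting, the guide would be the rain-free residue image rather than the input itself, so that streak energy lands in the high-frequency channel while background structure stays in the low-frequency one.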
URL
https://arxiv.org/abs/1904.05050