Abstract
Low-light image enhancement is challenging for two reasons. First, it must handle not only luminance restoration but also image contrast, denoising, and color distortion simultaneously. Second, the effectiveness of existing low-light enhancement methods depends on paired or unpaired training data, which leads to poor generalization. To address these problems, we propose ZERRINNet, a new learning-based zero-shot low-light enhancement method built on Retinex decomposition. We first design the N-Net network, together with a noise loss term, to denoise the original low-light image by estimating its noise. RI-Net then estimates the reflectance and illumination components; to address color distortion and contrast, we constrain these two components with a texture loss term and a segmented smoothing loss. Finally, because our method is zero-reference, it is not affected by the training data of paired or unpaired datasets, so its generalization performance is greatly improved. We validate this with a self-collected real-world low-light dataset and with downstream vision tasks such as face detection, object recognition, and instance segmentation. Comparative experiments on a large number of public datasets show that our method is competitive with current state-of-the-art methods. The code is available at: this https URL
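The abstract builds on the Retinex model, which factors an image I into a reflectance component R and an illumination component L (I = R ⊙ L) and enhances the image by adjusting L while preserving R. The sketch below is a minimal single-scale illustration of that idea, not the paper's ZERRINNet method: it estimates illumination with a crude box-blur proxy (the `box_blur` helper is invented for this example) instead of a learned network, and brightens by gamma-correcting the estimated illumination.

```python
import numpy as np

def box_blur(img, k=15):
    """Crude local mean over a k x k window, used as an illumination proxy."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def retinex_enhance(img, gamma=0.5, eps=1e-6):
    """Toy Retinex enhancement of a grayscale image in [0, 1].

    Decompose I = R * L, then recompose with a gamma-lifted illumination:
    darker regions (small L) are brightened more than bright ones.
    """
    L = np.clip(box_blur(img), eps, 1.0)       # illumination estimate
    R = img / L                                # reflectance R = I / L
    return np.clip(R * L ** gamma, 0.0, 1.0)   # recompose with lifted L
```

In ZERRINNet the decomposition is instead produced by RI-Net and constrained by the texture and segmented smoothing losses, with N-Net removing noise first; this snippet only shows why adjusting L alone can brighten an image without washing out reflectance detail.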
URL
https://arxiv.org/abs/2311.02995