Abstract
When taking photos in dim-light environments, very little light reaches the sensor, so the captured images are usually extremely dark, heavily corrupted by noise, and unable to reproduce real-world colors. Under these conditions, traditional single-image denoising methods are often ineffective. A common remedy is to capture multiple frames of the same scene to raise the signal-to-noise ratio. This paper proposes a recurrent fully convolutional network (RFCN) that processes burst photos taken under extremely low-light conditions and produces denoised images with improved brightness. Our model maps raw burst images directly to sRGB outputs, either producing a single best image or a denoised multi-frame sequence. We show that this approach handles both the low-level task of denoising and the higher-level tasks of color correction and enhancement, all performed end-to-end by our network. Our method achieves better results than state-of-the-art methods. In addition, we applied a model trained on one type of camera, without fine-tuning, to photos captured by different cameras and obtained similar end-to-end enhancement.
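The core idea above can be illustrated with a minimal sketch: a fully convolutional network whose hidden feature map is carried recurrently across the frames of a burst, emitting an sRGB estimate at each step. This is only an illustrative assumption of the architecture (layer sizes, the simple concatenation-based recurrence, and the channel counts are not taken from the paper):

```python
import torch
import torch.nn as nn

class RFCN(nn.Module):
    """Minimal sketch of a recurrent fully convolutional network for
    burst denoising. The recurrence and layer widths are illustrative
    assumptions, not the paper's exact architecture."""

    def __init__(self, in_ch=4, hid_ch=16, out_ch=3):
        super().__init__()
        self.hid_ch = hid_ch
        # Fuse the current packed-raw frame with the hidden state from
        # the previous frame; every layer is convolutional, so the
        # network accepts arbitrary spatial sizes.
        self.fuse = nn.Sequential(
            nn.Conv2d(in_ch + hid_ch, hid_ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hid_ch, hid_ch, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.to_srgb = nn.Conv2d(hid_ch, out_ch, 3, padding=1)

    def forward(self, burst):
        # burst: (batch, frames, channels, H, W) of packed raw frames
        b, t, c, h, w = burst.shape
        hidden = burst.new_zeros(b, self.hid_ch, h, w)
        outputs = []
        for i in range(t):
            hidden = self.fuse(torch.cat([burst[:, i], hidden], dim=1))
            # Sigmoid keeps each per-frame sRGB estimate in [0, 1].
            outputs.append(torch.sigmoid(self.to_srgb(hidden)))
        # Per-frame denoised sequence; outputs[:, -1] is the estimate
        # after the whole burst has been seen.
        return torch.stack(outputs, dim=1)

net = RFCN()
burst = torch.rand(1, 4, 4, 32, 32)  # 4 frames of 4-channel packed raw
out = net(burst)
print(tuple(out.shape))  # (1, 4, 3, 32, 32)
```

Because every layer is convolutional, the same weights apply to any resolution, and because state is carried frame to frame, the network can either return the full denoised sequence or only its final frame as the single best image.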
URL
https://arxiv.org/abs/1904.07483