Paper Reading AI Learner

DI-Retinex: Digital-Imaging Retinex Theory for Low-Light Image Enhancement

2024-04-04 09:53:00
Shangquan Sun, Wenqi Ren, Jingyang Peng, Fenglong Song, Xiaochun Cao

Abstract

Many existing methods for low-light image enhancement (LLIE) based on Retinex theory ignore important factors that affect the validity of this theory in digital imaging, such as noise, quantization error, non-linearity, and dynamic range overflow. In this paper, we propose a new expression called Digital-Imaging Retinex theory (DI-Retinex) through theoretical and experimental analysis of Retinex theory in digital imaging. Our new expression includes an offset term in the enhancement model, which allows for pixel-wise brightness and contrast adjustment with a non-linear mapping function. In addition, to solve the low-light enhancement problem in an unsupervised manner, we propose an image-adaptive masked reverse degradation loss in Gamma space. We also design a variance suppression loss for regulating the additional offset term. Extensive experiments show that our proposed method outperforms all existing unsupervised methods in terms of visual quality, model size, and speed. Our algorithm can also assist downstream face detectors in low-light conditions, as it yields the largest performance gain after low-light enhancement compared to other methods.
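The abstract describes a per-pixel enhancement model with an additive offset term, a masked reverse degradation loss in Gamma space, and a variance suppression loss on the offset. The paper's exact formulation is not given here, so the following is only a minimal sketch of one plausible reading of those components; the names `alpha`, `beta`, the gamma value, and the loss definitions are assumptions, not the authors' method.

```python
# Hedged sketch of an offset-augmented enhancement model and the two losses
# named in the abstract. All specifics (alpha/beta maps, gamma=2.2, the exact
# masking and variance definitions) are assumptions for illustration only.
import torch


def enhance(x: torch.Tensor, alpha: torch.Tensor, beta: torch.Tensor) -> torch.Tensor:
    """Pixel-wise brightness/contrast adjustment with an additive offset.
    x, alpha, beta: tensors of shape (B, C, H, W), with x in [0, 1]."""
    # Clamping models the dynamic-range limits of digital imaging.
    return torch.clamp(alpha * x + beta, 0.0, 1.0)


def variance_suppression_loss(beta: torch.Tensor) -> torch.Tensor:
    """One plausible reading of the variance suppression loss: penalize the
    spatial variance of the offset map so it stays well behaved."""
    return beta.var(dim=(2, 3)).mean()


def reverse_degradation_loss(enhanced: torch.Tensor, low: torch.Tensor,
                             gamma: float = 2.2,
                             mask: torch.Tensor | None = None) -> torch.Tensor:
    """Hedged guess at a masked reverse degradation loss in Gamma space:
    re-darken the enhanced image with a gamma curve and compare it to the
    low-light input, optionally weighting pixels with an image-adaptive mask."""
    degraded = enhanced.clamp(min=1e-6) ** gamma  # gamma > 1 darkens values in [0, 1]
    diff = (degraded - low).abs()
    if mask is not None:
        diff = diff * mask
    return diff.mean()


if __name__ == "__main__":
    x = torch.rand(1, 3, 64, 64) * 0.2      # synthetic low-light input
    alpha = torch.full_like(x, 3.0)         # toy scale map (normally network-predicted)
    beta = torch.zeros_like(x)              # toy offset map (normally network-predicted)
    y = enhance(x, alpha, beta)
    print(variance_suppression_loss(beta).item(),
          reverse_degradation_loss(y, x).item())
```

In an actual training setup, `alpha` and `beta` would be predicted per pixel by a lightweight network and the two losses combined with weighting factors; the toy constants above are only there to make the sketch runnable.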


URL

https://arxiv.org/abs/2404.03327

PDF

https://arxiv.org/pdf/2404.03327.pdf

