Abstract
Low-light image enhancement (LLIE) aims to improve illumination while preserving high-quality color and texture. However, existing methods often fail to extract reliable feature representations due to severely degraded pixel-level information under low-light conditions, resulting in poor texture restoration, color inconsistency, and artifacts. To address these challenges, we propose LightQANet, a novel framework that introduces quantized and adaptive feature learning for low-light enhancement, achieving consistent and robust image quality across diverse lighting conditions. From the static modeling perspective, we design a Light Quantization Module (LQM) to explicitly extract and quantify illumination-related factors from image features. By enforcing structured light factor learning, LQM enhances the extraction of light-invariant representations and mitigates feature inconsistency across varying illumination levels. From the dynamic adaptation perspective, we introduce a Light-Aware Prompt Module (LAPM), which encodes illumination priors into learnable prompts to dynamically guide the feature learning process. LAPM enables the model to flexibly adapt to complex and continuously changing lighting conditions, further improving image enhancement. Extensive experiments on multiple low-light datasets demonstrate that our method achieves state-of-the-art performance, delivering superior qualitative and quantitative results across various challenging lighting scenarios.
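The abstract does not give implementation details, but the core idea behind a quantization module like the LQM can be illustrated with a vector-quantization-style sketch: continuous illumination-related feature vectors are snapped to a small codebook of discrete "light factor" prototypes, so that features captured under different exposure levels map to shared, light-invariant codes. The function name, codebook size, and values below are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def quantize_light_factors(features, codebook):
    """Assign each illumination feature vector to its nearest codebook entry.

    features: (N, D) array of illumination-related feature vectors
    codebook: (K, D) array of learned light-factor prototypes (hypothetical)
    returns: (indices, quantized) where quantized[i] == codebook[indices[i]]
    """
    # Squared Euclidean distance from every feature to every codebook entry,
    # computed via broadcasting: result has shape (N, K).
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = d2.argmin(axis=1)          # nearest prototype per feature
    return indices, codebook[indices]    # discrete code + quantized feature

# Toy usage: three feature vectors, a two-entry codebook standing in for
# "dark" vs. "bright" light factors.
feats = np.array([[0.10, 0.20], [0.90, 1.00], [0.15, 0.10]])
codes = np.array([[0.00, 0.00], [1.00, 1.00]])
idx, quantized = quantize_light_factors(feats, codes)
# idx is [0, 1, 0]: the two dim features share a code despite differing values.
```

In an actual network the codebook would be learned end-to-end (as in VQ-VAE-style training) and the quantization applied inside the feature extractor; this sketch only shows the assignment step that makes the learned representation discrete and consistent across illumination levels.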
URL
https://arxiv.org/abs/2510.14753