Abstract
Existing low-light image enhancement (LLIE) and joint LLIE and deblurring (LLIE-deblur) models have made strides in addressing predefined degradations, yet they are often constrained by dynamically coupled degradations. To address these challenges, we introduce a Unified Receptance Weighted Key Value (URWKV) model with a multi-state perspective, enabling flexible and effective degradation restoration for low-light images. Specifically, we customize the core URWKV block to perceive and analyze complex degradations by leveraging multiple intra- and inter-stage states. First, inspired by the pupil mechanism in the human visual system, we propose Luminance-adaptive Normalization (LAN), which adjusts normalization parameters based on rich inter-stage states, allowing for adaptive, scene-aware luminance modulation. Second, we aggregate multiple intra-stage states through an exponential moving average, effectively capturing subtle variations while mitigating the information loss inherent in a single-state mechanism. To reduce the degradation effects commonly associated with conventional skip connections, we propose the State-aware Selective Fusion (SSF) module, which dynamically aligns and integrates multi-state features across encoder stages, selectively fusing contextual information. Compared with state-of-the-art models, our URWKV model achieves superior performance on various benchmarks while requiring significantly fewer parameters and less computation.
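The exponential-moving-average aggregation of intra-stage states can be illustrated with a minimal sketch. This is not the paper's implementation; the array shapes, the smoothing factor `alpha`, and the function name are assumptions chosen only to show how an EMA lets earlier states keep contributing instead of being discarded.

```python
import numpy as np

def ema_aggregate(states, alpha=0.6):
    """Fold a sequence of intra-stage states into one tensor with an
    exponential moving average: newer states dominate, but older
    states are never fully discarded (unlike a single-state update).
    `alpha` is a hypothetical smoothing factor, not a value from the paper.
    """
    agg = states[0]
    for s in states[1:]:
        agg = alpha * s + (1.0 - alpha) * agg
    return agg

# Three toy "states" from successive blocks within one stage.
states = [np.full((2, 2), v, dtype=float) for v in (1.0, 2.0, 4.0)]
fused = ema_aggregate(states)  # every state influences the result
```

Keeping a running average rather than only the latest state is what lets subtle variations from earlier blocks survive into the fused representation.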
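The idea behind Luminance-adaptive Normalization (normalization parameters conditioned on inter-stage state rather than fixed) can likewise be sketched in a few lines. The descriptor (mean luminance of the state) and the linear modulation below are illustrative placeholders, not the paper's actual LAN design.

```python
import numpy as np

def luminance_adaptive_norm(x, state, eps=1e-5):
    """Toy layer-norm variant: the scale (gamma) and shift (beta) are
    derived from an inter-stage state descriptor instead of being
    fixed learned constants. The 0.1/0.05 modulation coefficients
    are arbitrary placeholders for illustration only."""
    mu, sigma = x.mean(), x.std()
    lum = state.mean()          # hypothetical luminance descriptor
    gamma = 1.0 + 0.1 * lum     # scene-aware scale
    beta = 0.05 * lum           # scene-aware shift
    return gamma * (x - mu) / (sigma + eps) + beta

x = np.array([[0.0, 2.0], [4.0, 6.0]])
out = luminance_adaptive_norm(x, state=np.zeros((2, 2)))
```

With a zero state the sketch reduces to plain normalization; a brighter state shifts and rescales the output, mimicking pupil-like adaptation to scene luminance.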
URL
https://arxiv.org/abs/2505.23068