Abstract
Generating enhanced results that align with human visual preferences for high-quality, well-lit images remains a challenge in low-light image enhancement (LLIE). In this paper, we propose HiLLIE, a human-in-the-loop LLIE training framework that improves the visual quality of unsupervised LLIE model outputs through iterative training stages. At each stage, we introduce human guidance into the training process through efficient visual quality annotations of enhanced outputs. We then employ a tailored image quality assessment (IQA) model to learn the human visual preferences encoded in the acquired labels, and use it to guide the training of the enhancement model. With only a small number of pairwise ranking annotations required at each stage, our approach continually improves the IQA model's ability to simulate human visual assessment of enhanced outputs, leading to visually appealing LLIE results. Extensive experiments demonstrate that our approach significantly improves unsupervised LLIE model performance both quantitatively and qualitatively. The code and collected ranking dataset will be available at this https URL.
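The abstract describes fitting an IQA model to pairwise human preference labels. As a minimal illustration only (the actual HiLLIE IQA model is a learned deep network whose architecture and loss are not given in the abstract), the sketch below fits a toy linear scorer to pairwise rankings with a Bradley-Terry-style logistic ranking loss; the feature extractor and all names here are hypothetical.

```python
import numpy as np

# Hypothetical sketch: fit a linear quality scorer on pairwise preference
# labels with a logistic (Bradley-Terry) ranking loss. The real HiLLIE IQA
# model is a deep network; the hand-crafted features here are illustrative.

rng = np.random.default_rng(0)

def features(img):
    """Toy features: mean brightness and contrast of the image."""
    return np.array([img.mean(), img.std()])

def fit_ranker(pairs, lr=0.5, epochs=200):
    """pairs: list of (preferred_image, rejected_image) arrays."""
    w = np.zeros(2)
    for _ in range(epochs):
        for better, worse in pairs:
            diff = features(better) - features(worse)
            p = 1.0 / (1.0 + np.exp(-w @ diff))  # P(better is ranked higher)
            w += lr * (1.0 - p) * diff           # gradient ascent on log-likelihood
    return w

# Fake annotated pairs: a well-lit scene vs. an under-exposed copy of it,
# standing in for human rankings of enhanced outputs.
pairs = []
for _ in range(20):
    scene = rng.random((32, 32))
    pairs.append((scene, scene * 0.2))  # darker copy was judged worse

w = fit_ranker(pairs)
score = lambda img: w @ features(img)
bright, dark = pairs[0]
print(score(bright) > score(dark))  # the fitted scorer prefers the brighter image
```

In the full framework such a scorer, once fitted to each stage's annotations, would supply a differentiable quality signal for training the enhancement model; here it only demonstrates the pairwise-ranking fitting step.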
URL
https://arxiv.org/abs/2505.02134