Abstract
Ultra-high dynamic range (UHDR) scenes exhibit significant exposure disparities between bright and dark regions. Such conditions are commonly encountered in nighttime scenes with light sources. Even with standard exposure settings, a bimodal intensity distribution with boundary peaks often emerges, making it difficult to preserve both highlight and shadow details simultaneously. RGB-based bracketing methods can capture details at both ends using short-long exposure pairs, but they are susceptible to misalignment and ghosting artifacts. We found that a single short-exposure image already retains sufficient highlight detail; the main challenge of UHDR reconstruction lies in denoising and recovering information in dark regions. Compared with RGB images, RAW images, with their higher bit depth and more predictable noise characteristics, offer greater potential for addressing this challenge. This raises a key question: can we learn to see everything in UHDR scenes using only a single short-exposure RAW image? In this study, we rely solely on a single short-exposure frame, which inherently avoids ghosting and motion blur, making it particularly robust in dynamic scenes. To this end, we introduce UltraLED, a two-stage framework that performs exposure correction via a ratio map to balance dynamic range, followed by a brightness-aware RAW denoiser to enhance detail recovery in dark regions. To support this setting, we design a 9-stop bracketing pipeline to synthesize realistic UHDR images and contribute a corresponding dataset covering diverse scenes, using only the shortest exposure as input for reconstruction. Extensive experiments show that UltraLED significantly outperforms existing single-frame approaches. Our code and dataset are made publicly available at this https URL.
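To make the two-stage design described in the abstract concrete, below is a minimal PyTorch sketch of the idea: stage one predicts a per-pixel ratio map that rebalances the exposure of a short-exposure RAW frame, and stage two denoises the corrected RAW while being conditioned on that ratio map so heavily amplified dark regions receive stronger treatment. The module names (`RatioMapNet`, `BrightnessAwareDenoiser`), the packed 4-channel Bayer input, the network widths, and the residual formulation are all assumptions for illustration, not the authors' released implementation.

```python
# Hypothetical sketch of the two-stage pipeline described in the abstract.
# Architecture details are assumptions, not the authors' UltraLED code.
import torch
import torch.nn as nn


class RatioMapNet(nn.Module):
    """Stage 1 (assumed form): predict a per-pixel exposure ratio map from packed RAW."""
    def __init__(self, ch=4, width=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 1, 3, padding=1), nn.Softplus(),  # positive multiplicative gains
        )

    def forward(self, raw):
        return self.body(raw)  # (B, 1, H, W) ratio map


class BrightnessAwareDenoiser(nn.Module):
    """Stage 2 (assumed form): denoise the exposure-corrected RAW, conditioned on the
    ratio map so that strongly amplified (dark) regions get stronger denoising."""
    def __init__(self, ch=4, width=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch + 1, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, ch, 3, padding=1),
        )

    def forward(self, raw, ratio):
        corrected = torch.clamp(raw * ratio, 0.0, 1.0)           # exposure correction
        residual = self.body(torch.cat([corrected, ratio], dim=1))  # brightness-aware branch
        return torch.clamp(corrected + residual, 0.0, 1.0)


if __name__ == "__main__":
    raw = torch.rand(1, 4, 256, 256)        # packed short-exposure RAW, normalized to [0, 1]
    stage1, stage2 = RatioMapNet(), BrightnessAwareDenoiser()
    out = stage2(raw, stage1(raw))
    print(out.shape)                        # torch.Size([1, 4, 256, 256])
```

In this reading, keeping the correction as a multiplicative ratio map (rather than regressing the output directly) preserves the linearity of the RAW signal, and feeding the same map to the denoiser tells it how much each pixel was amplified, which is a proxy for the local noise level.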
URL
https://arxiv.org/abs/2510.07741