Abstract
This paper proposes the first non-flow-based deep framework for high dynamic range (HDR) imaging of dynamic scenes with large-scale foreground motions. In state-of-the-art deep HDR imaging, input images are first aligned using optical flow before merging, a step that remains error-prone under occlusion and large motions. In stark contrast to flow-based methods, we formulate HDR imaging as an image translation problem that requires no optical flow. Moreover, our simple translation network can automatically hallucinate plausible HDR details in the presence of total occlusion, saturation, and under-exposure, which are otherwise almost impossible to recover with conventional optimization approaches. Our framework can also be extended to different reference images. Extensive qualitative and quantitative comparisons show that our approach produces excellent results, with color artifacts and geometric distortions significantly reduced compared to existing state-of-the-art methods, and that it is robust across various inputs, including images without radiometric calibration.
URL
https://arxiv.org/abs/1711.08937