Abstract
In this paper, we propose a new challenge: synthesizing a novel view in a more practical environment, where the number of input multi-view images is limited and illumination variations are significant. Despite their recent success, neural radiance fields (NeRF) require a massive number of input multi-view images taken under constrained illumination. To address this problem, we propose ExtremeNeRF, which utilizes occlusion-aware multi-view albedo consistency, supported by geometric alignment and depth consistency. We extract intrinsic image components that should be illumination-invariant across different views, enabling direct appearance comparison between input and novel views under unconstrained illumination. We provide extensive experimental results for evaluating the task on the newly built NeRF Extreme benchmark, the first in-the-wild novel view synthesis benchmark captured under multiple viewing directions and varying illumination. The project page is at this https URL
URL
https://arxiv.org/abs/2303.11728