Abstract
We present a novel framework for regularizing Neural Radiance Fields (NeRF) in the few-shot setting with a geometry-aware consistency regularization. The proposed approach leverages a depth map rendered at an unobserved viewpoint to warp the sparse input images to that viewpoint and imposes them as pseudo ground truths to facilitate learning of NeRF. By encouraging such geometry-aware consistency at the feature level instead of using a pixel-level reconstruction loss, we regularize the NeRF at semantic and structural levels while still allowing view-dependent radiance to be modeled, accounting for color variations across viewpoints. We also propose an effective method to filter out erroneous warped solutions, along with training strategies that stabilize optimization. We show that our model achieves competitive results compared to state-of-the-art few-shot NeRF models. Project page is available at this https URL.
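The abstract's core operation is warping a sparse input (source) image to an unobserved (target) viewpoint using the depth map that NeRF renders at the target view. A minimal sketch of that inverse-warping step follows, assuming a pinhole camera model with shared intrinsics; all function and variable names here are hypothetical, and this is an illustration of depth-guided warping in general, not the paper's exact implementation:

```python
import numpy as np

def warp_source_to_target(src_img, tgt_depth, K, R, t):
    """Inverse-warp a source-view image to the target (unobserved) viewpoint
    using the depth map rendered at the target view.

    src_img   : (H, W, 3) source-view image
    tgt_depth : (H, W) depth rendered by NeRF at the target view
    K         : (3, 3) shared pinhole intrinsics
    R, t      : rotation (3, 3) and translation (3,) taking target-camera
                coordinates into source-camera coordinates
    Returns the warped image (a pseudo ground truth for the target view)
    and a validity mask marking pixels that project inside the source frame.
    """
    H, W = tgt_depth.shape
    # Target-view pixel grid in homogeneous coordinates, shape (3, H*W).
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    # Back-project target pixels to 3D points using the rendered depth.
    pts_tgt = np.linalg.inv(K) @ pix * tgt_depth.reshape(1, -1)
    # Transform into the source camera frame and project with K.
    pts_src = R @ pts_tgt + t[:, None]
    proj = K @ pts_src
    z = proj[2]
    valid = z > 1e-6  # points behind the source camera are invalid
    x = np.where(valid, proj[0] / np.maximum(z, 1e-6), -1.0)
    y = np.where(valid, proj[1] / np.maximum(z, 1e-6), -1.0)
    # Nearest-neighbour sampling for brevity (bilinear in practice).
    xi = np.round(x).astype(int)
    yi = np.round(y).astype(int)
    inside = valid & (xi >= 0) & (xi < W) & (yi >= 0) & (yi < H)
    warped = np.zeros((H * W, 3), dtype=src_img.dtype)
    warped[inside] = src_img[yi[inside], xi[inside]]
    return warped.reshape(H, W, 3), inside.reshape(H, W)
```

Pixels that fall outside the source frame (or behind the camera) are masked out; the abstract's filtering of erroneous warped solutions goes beyond such a purely geometric mask, discarding warps whose rendered depth is unreliable. The warped image would then be compared to the NeRF rendering at the target view via a feature-level loss rather than per-pixel color differences.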
URL
https://arxiv.org/abs/2301.10941