Abstract
Recent works on generalizable NeRFs have shown promising results on novel view synthesis from single or few images. However, such models have rarely been applied to downstream tasks beyond synthesis, such as semantic understanding and parsing. In this paper, we propose a novel framework named FeatureNeRF to learn generalizable NeRFs by distilling pre-trained vision foundation models (e.g., DINO, Latent Diffusion). FeatureNeRF lifts 2D pre-trained foundation models to 3D space via neural rendering, and then extracts deep features for 3D query points from NeRF MLPs. Consequently, it maps 2D images to continuous 3D semantic feature volumes, which can be used for various downstream tasks. We evaluate FeatureNeRF on 2D/3D semantic keypoint transfer and 2D/3D object part segmentation. Our extensive experiments demonstrate the effectiveness of FeatureNeRF as a generalizable 3D semantic feature extractor. Our project page is available at this https URL .
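The abstract describes rendering deep features from NeRF MLPs via neural rendering so that 2D teacher features can supervise a 3D feature volume. A minimal sketch of that idea follows, using standard NeRF alpha compositing along a single ray; the function names, dimensions, and the plain MSE distillation loss are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def render_features(densities, features, deltas):
    """Volume-render per-sample features along one ray (standard NeRF quadrature).

    densities: (N,) non-negative sigma values at the ray samples
    features:  (N, D) feature vectors predicted by the NeRF MLP (illustrative)
    deltas:    (N,) distances between adjacent samples
    Returns the composited (D,) feature and the (N,) compositing weights.
    """
    alphas = 1.0 - np.exp(-densities * deltas)                        # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))    # accumulated transmittance
    weights = trans * alphas                                          # compositing weights
    return weights @ features, weights

def distill_loss(rendered_feat, teacher_feat):
    """Toy distillation objective: match the rendered feature to a 2D
    foundation-model feature at the corresponding pixel (MSE, assumed)."""
    return float(np.mean((rendered_feat - teacher_feat) ** 2))

# Usage: one ray with 3 samples and 4-dimensional features.
densities = np.array([0.5, 1.0, 1.5])
features = np.random.rand(3, 4)
deltas = np.ones(3)
rendered, weights = render_features(densities, features, deltas)
loss = distill_loss(rendered, np.random.rand(4))
```

In a full pipeline this loss would be backpropagated through the NeRF MLP so that, after training, querying the MLP at any 3D point yields a semantic feature usable for keypoint transfer or part segmentation.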
URL
https://arxiv.org/abs/2303.12786