Abstract
Recently, 3D Gaussian Splatting (3DGS) has demonstrated impressive novel view synthesis results, while allowing the rendering of high-resolution images in real time. However, leveraging 3D Gaussians for surface reconstruction poses significant challenges due to the explicit and disconnected nature of 3D Gaussians. In this work, we present Gaussian Opacity Fields (GOF), a novel approach for efficient, high-quality, and compact surface reconstruction in unbounded scenes. Our GOF is derived from ray-tracing-based volume rendering of 3D Gaussians, enabling direct geometry extraction from 3D Gaussians by identifying its level set, without resorting to Poisson reconstruction or TSDF fusion as in previous work. We approximate the surface normal of Gaussians as the normal of the ray-Gaussian intersection plane, enabling the application of regularization that significantly enhances geometry. Furthermore, we develop an efficient geometry extraction method utilizing marching tetrahedra, where the tetrahedral grids are induced from 3D Gaussians and thus adapt to the scene's complexity. Our evaluations reveal that GOF surpasses existing 3DGS-based methods in surface reconstruction and novel view synthesis. Further, it compares favorably to, or even outperforms, neural implicit methods in both quality and speed.
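As a rough illustration of the ray-Gaussian intersection the abstract refers to, the sketch below finds the point along a ray where a 3D Gaussian attains its maximum value, evaluates the opacity contribution there, and takes the normalized gradient of the Gaussian as an approximate surface normal. This is a minimal reconstruction from the abstract's description, not the authors' implementation; the function name and the closed-form maximizer are assumptions based on standard Gaussian algebra.

```python
import numpy as np

def ray_gaussian_intersection(o, d, mu, cov, opacity):
    """Sketch: evaluate a 3D Gaussian's contribution along a ray.

    o, d   : ray origin and direction (3-vectors)
    mu, cov: Gaussian mean (3,) and covariance (3, 3)
    opacity: the Gaussian's peak opacity

    Returns the parameter t* of the maximum along the ray, the
    opacity contribution there, and an approximate surface normal
    (the direction of the Gaussian's gradient at that point).
    """
    P = np.linalg.inv(cov)  # precision matrix
    # Maximize exp(-0.5 (x - mu)^T P (x - mu)) for x = o + t d.
    # Setting the t-derivative to zero gives the closed form:
    #   t* = d^T P (mu - o) / (d^T P d)
    t_star = d @ P @ (mu - o) / (d @ P @ d)
    x = o + t_star * d
    g = np.exp(-0.5 * (x - mu) @ P @ (x - mu))
    alpha = opacity * g  # opacity contribution at the intersection
    # The intersection-plane normal is approximated by the gradient
    # direction of the Gaussian at x, proportional to P (mu - x).
    n = P @ (mu - x)
    n = n / np.linalg.norm(n)
    return t_star, alpha, n

# Example: an anisotropic axis-aligned Gaussian hit by a ray along +z.
o = np.array([0.2, 0.0, -5.0])
d = np.array([0.0, 0.0, 1.0])
mu = np.zeros(3)
cov = np.diag([0.5, 0.5, 0.1])
t_star, alpha, n = ray_gaussian_intersection(o, d, mu, cov, opacity=0.9)
```

Because the maximizer has a closed form, this per-ray evaluation is cheap; accumulating such contributions along each ray is what makes the opacity field directly queryable for level-set extraction.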
URL
https://arxiv.org/abs/2404.10772