Abstract
Recent developments in neural rendering have greatly advanced photo-realistic 3D scene rendering in both academic and commercial settings. The latest method, 3D Gaussian Splatting (3D-GS), has set new benchmarks for rendering quality and speed. Nevertheless, the limitations of 3D-GS become pronounced when synthesizing novel viewpoints, especially views that deviate strongly from those seen during training. Additionally, artifacts such as dilation and aliasing arise when zooming in or out. These challenges all trace back to a single underlying issue: insufficient sampling. In this paper, we present a bootstrapping method that substantially mitigates this problem. The approach employs a diffusion model to enhance novel views rendered by the trained 3D-GS, thereby streamlining the training process. Our results show that bootstrapping effectively reduces artifacts and yields clear gains on evaluation metrics. Furthermore, our method is versatile and can be easily integrated, allowing a variety of 3D reconstruction projects to benefit from it.
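To make the described mechanism concrete, the sketch below shows one plausible reading of the bootstrapping loop: render under-sampled novel views with the trained 3D-GS model, repair them with a diffusion model, and reuse the enhanced images as pseudo ground truth for further optimization. Every name here (`gs_model`, `enhancer`, `sample_novel_pose`, and their methods) is a hypothetical placeholder for illustration, not the paper's actual interface.

```python
# A minimal sketch of the bootstrapping idea, under assumed interfaces.
# gs_model: trained 3D-GS model with .render(pose) and .optimize(views) (assumed)
# enhancer: diffusion model with an .enhance(image) method (assumed)
# sample_novel_pose: samples a camera pose away from the training trajectory (assumed)

def bootstrap(gs_model, enhancer, train_views, rounds=3, novel_per_round=16):
    """Refine a trained 3D-GS model using diffusion-enhanced novel views."""
    for _ in range(rounds):
        # Sample camera poses that deviate from the training views, where
        # 3D-GS renders are under-sampled and artifact-prone.
        novel_poses = [sample_novel_pose(train_views) for _ in range(novel_per_round)]

        # Render those views with the current model; this is where dilation,
        # aliasing, and other insufficient-sampling artifacts appear.
        raw = [(pose, gs_model.render(pose)) for pose in novel_poses]

        # Let the diffusion model repair the raw renders, producing plausible
        # pseudo-ground-truth images for the novel poses.
        pseudo_gt = [(pose, enhancer.enhance(image)) for pose, image in raw]

        # Continue optimizing the Gaussians against the original training
        # views plus the enhanced novel views.
        gs_model.optimize(train_views + pseudo_gt)

    return gs_model
```

One natural consequence of this loop structure, if it matches the paper's design, is plug-and-play integration: it treats the 3D-GS model and the diffusion enhancer as black boxes, which is consistent with the abstract's claim that the method can be dropped into other 3D reconstruction pipelines.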
URL
https://arxiv.org/abs/2404.18669