Novel view synthesis from limited observations remains an important and persistent challenge. However, existing NeRF-based few-shot view synthesis methods often sacrifice efficiency to obtain an accurate 3D representation. To address this challenge, we propose a few-shot view synthesis framework based on 3D Gaussian Splatting that enables real-time, photo-realistic view synthesis with as few as three training views. The proposed method, dubbed FSGS, handles the extremely sparse initial SfM points with a carefully designed Gaussian Unpooling process. Our method iteratively distributes new Gaussians around the most representative locations, subsequently infilling local details in vacant areas. We also integrate a large-scale pre-trained monocular depth estimator within the Gaussian optimization process, leveraging online augmented views to guide the geometric optimization towards an optimal solution. Starting from sparse points observed from limited input viewpoints, FSGS can accurately grow into unseen regions, comprehensively covering the scene and boosting the rendering quality of novel views. Overall, FSGS achieves state-of-the-art performance in both accuracy and rendering efficiency across diverse datasets, including LLFF, Mip-NeRF360, and Blender. Project website: this https URL.
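The Gaussian Unpooling idea described above — growing new Gaussians around the most representative existing ones to infill vacant areas — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the selection score, the threshold, and the midpoint placement rule are all assumptions chosen for clarity (the actual method operates on full 3D Gaussian parameters during splatting optimization, not bare points).

```python
import numpy as np

def gaussian_unpooling(centers, scores, threshold=0.5):
    """Hypothetical sketch of Gaussian Unpooling.

    For each Gaussian whose score (e.g. an accumulated view-space
    gradient magnitude — an assumed proxy for "most representative")
    exceeds a threshold, spawn a new Gaussian at the midpoint between
    it and its nearest neighbor, densifying under-covered regions.
    """
    selected = np.where(scores > threshold)[0]
    new_centers = []
    for i in selected:
        # Distance from Gaussian i to every other center; exclude self.
        d = np.linalg.norm(centers - centers[i], axis=1)
        d[i] = np.inf
        j = int(np.argmin(d))  # nearest existing Gaussian
        new_centers.append(0.5 * (centers[i] + centers[j]))
    if new_centers:
        return np.vstack([centers, np.array(new_centers)])
    return centers

# Toy example: four sparse SfM-like points, two flagged for densification.
pts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 0.]])
scores = np.array([0.9, 0.1, 0.8, 0.2])
grown = gaussian_unpooling(pts, scores)
print(grown.shape)  # (6, 3): two new midpoint Gaussians appended
```

Iterating this step during optimization lets the point set spread outward from the sparse initialization, which is the behavior the abstract refers to as growing into unseen regions.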