Abstract
While feed-forward Gaussian splatting models provide computational efficiency and effectively handle sparse input settings, their performance is fundamentally limited by the reliance on a single forward pass during inference. We propose ReSplat, a feed-forward recurrent Gaussian splatting model that iteratively refines 3D Gaussians without explicitly computing gradients. Our key insight is that the Gaussian splatting rendering error serves as a rich feedback signal, guiding the recurrent network to learn effective Gaussian updates. This feedback signal naturally adapts to unseen data distributions at test time, enabling robust generalization. To initialize the recurrent process, we introduce a compact reconstruction model that operates in a $16 \times$ subsampled space, producing $16 \times$ fewer Gaussians than previous per-pixel Gaussian models. This substantially reduces computational overhead and allows for efficient Gaussian updates. Extensive experiments across varying numbers of input views (2, 8, 16), resolutions ($256 \times 256$ to $540 \times 960$), and datasets (DL3DV and RealEstate10K) demonstrate that our method achieves state-of-the-art performance while significantly reducing the number of Gaussians and improving the rendering speed. Our project page is at this https URL.
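The core loop the abstract describes can be sketched in a few lines. The following is a toy illustration only, with hypothetical stand-ins: `render` here is an identity map rather than a real Gaussian rasterizer, and `update_net` is a fixed error-scaling rule rather than the paper's learned recurrent network. It shows the shape of the idea: the rendering error is fed back to drive gradient-free parameter updates.

```python
# Toy sketch of a feedback-driven refinement loop (all names hypothetical).
# A real system would rasterize 3D Gaussians and apply a learned recurrent
# update network; here we use trivial stand-ins to show the control flow.

def render(gaussians):
    # Stand-in renderer: identity mapping from parameters to "pixels".
    return list(gaussians)

def update_net(gaussians, error, step=0.5):
    # Stand-in for the recurrent update network: nudge parameters using the
    # rendering error directly, without differentiating through the renderer.
    return [g + step * e for g, e in zip(gaussians, error)]

def refine(gaussians, target, iters=8):
    for _ in range(iters):
        rendered = render(gaussians)
        # The rendering error is the feedback signal guiding each update.
        error = [t - r for t, r in zip(target, rendered)]
        gaussians = update_net(gaussians, error)
    return gaussians

init = [0.0, 0.0, 0.0]     # coarse initialization (the paper's compact model)
target = [1.0, -2.0, 0.5]  # observed target-view pixels
refined = refine(init, target)
```

Because the updates never backpropagate through the renderer, the same loop can adapt to data unseen at training time; the error signal itself carries the test-time information.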
URL
https://arxiv.org/abs/2510.08575