Abstract
While 3D Gaussian Splatting has recently become popular for neural rendering, current methods rely on carefully engineered cloning and splitting strategies for placing Gaussians, which do not always generalize and may lead to poor-quality renderings. In addition, for real-world scenes, these methods rely on a good initial point cloud to perform well. In this work, we rethink 3D Gaussians as random samples drawn from an underlying probability distribution describing the physical representation of the scene -- in other words, as Markov Chain Monte Carlo (MCMC) samples. Under this view, we show that the 3D Gaussian updates are strikingly similar to a Stochastic Gradient Langevin Dynamics (SGLD) update. Since, as with MCMC, samples are nothing but past visit locations, adding new Gaussians under our framework can be realized without heuristics, simply by placing Gaussians at existing Gaussian locations. To encourage using fewer Gaussians for efficiency, we introduce an L1 regularizer on the Gaussians. On various standard evaluation scenes, we show that our method provides improved rendering quality, easy control over the number of Gaussians, and robustness to initialization.
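The SGLD connection mentioned above can be illustrated with a minimal sketch: an SGLD-style step is ordinary gradient descent plus injected Gaussian noise scaled by the step size. This is a generic illustration of the update form, not the paper's actual implementation; the function name, learning rate, and noise schedule here are assumptions for demonstration only.

```python
import numpy as np

def sgld_step(positions, grad, lr=1e-3, noise_scale=1.0, rng=None):
    """One SGLD-style update on Gaussian positions.

    Illustrative only: gradient descent plus Gaussian noise scaled by
    sqrt(2 * lr), the canonical SGLD form. The paper's actual update
    and noise schedule may differ.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.standard_normal(positions.shape)
    # Descent term pulls samples toward low loss; the noise term makes
    # the iterates explore, so they behave like MCMC samples.
    return positions - lr * grad + np.sqrt(2.0 * lr) * noise_scale * noise
```

With `noise_scale=0` the step reduces to plain gradient descent, which highlights that the stochastic exploration comes entirely from the injected noise term.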
URL
https://arxiv.org/abs/2404.09591