Abstract
Radiance fields have emerged as a predominant representation for modeling 3D scene appearance. Neural formulations such as Neural Radiance Fields provide high expressivity but require costly ray marching for rendering, whereas primitive-based methods such as 3D Gaussian Splatting offer real-time efficiency through splatting, but at the expense of representational power. Inspired by advances in both directions, we introduce splattable neural primitives, a new volumetric representation that reconciles the expressivity of neural models with the efficiency of primitive-based splatting. Each primitive encodes a bounded neural density field parameterized by a shallow neural network. Our formulation admits an exact analytical solution for line integrals, enabling efficient computation of perspectively accurate splatting kernels. As a result, our representation supports integration along view rays without the need for costly ray marching. The primitives flexibly adapt to scene geometry and, being larger than prior analytic primitives, reduce the number required per scene. On novel-view synthesis benchmarks, our approach matches the quality and speed of 3D Gaussian Splatting while using $10\times$ fewer primitives and $6\times$ fewer parameters. These advantages arise directly from the representation itself, without reliance on complex control or adaptation frameworks. The project page is available at this https URL.
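To make the notion of a "splatting kernel as a closed-form line integral" concrete, below is a minimal sketch, not the paper's formulation. The abstract does not give the analytic solution for the shallow-network density field, so a 3D Gaussian primitive (as in 3D Gaussian Splatting) stands in here: its density integrated along a view ray has a well-known closed form, which is what removes the need for per-ray sampling. The function name `gaussian_ray_integral` and the random test setup are illustrative assumptions, not code from the paper.

```python
# Sketch: closed-form line integral of a Gaussian density along a view ray,
# checked against numerical quadrature (the "ray marching" style baseline).
import numpy as np
from scipy.integrate import quad

def gaussian_ray_integral(o, d, mu, Sigma):
    """Integral of exp(-0.5 (x-mu)^T Sigma^{-1} (x-mu)) along the ray x(t) = o + t d, t in R."""
    A = np.linalg.inv(Sigma)
    m = o - mu
    a = d @ A @ d          # quadratic coefficient in t
    b = d @ A @ m          # linear coefficient in t
    c = m @ A @ m          # constant term
    # 1D Gaussian integral: \int exp(-0.5*a*t^2 - b*t - 0.5*c) dt = sqrt(2*pi/a) * exp(b^2/(2a) - c/2)
    return np.sqrt(2.0 * np.pi / a) * np.exp(0.5 * b * b / a - 0.5 * c)

rng = np.random.default_rng(0)
o = rng.normal(size=3)                              # ray origin (illustrative)
d = rng.normal(size=3); d /= np.linalg.norm(d)      # unit ray direction
mu = rng.normal(size=3)                             # primitive center
L = rng.normal(size=(3, 3))
Sigma = L @ L.T + 0.1 * np.eye(3)                   # positive-definite covariance

density = lambda t: np.exp(-0.5 * (o + t * d - mu) @ np.linalg.inv(Sigma) @ (o + t * d - mu))
numeric, _ = quad(density, -50.0, 50.0)
print(gaussian_ray_integral(o, d, mu, Sigma), numeric)   # the two values should agree closely
```

The paper's contribution, as described in the abstract, is an analogous exact line-integral solution for a bounded density field parameterized by a shallow neural network, so that each primitive can be splatted with perspective-accurate kernels while being more expressive and larger than an analytic Gaussian.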
URL
https://arxiv.org/abs/2510.08491