Abstract
We present MatDecompSDF, a novel framework for recovering high-fidelity 3D shapes and decomposing their physically-based material properties from multi-view images. The core challenge of inverse rendering lies in the ill-posed disentanglement of geometry, materials, and illumination from 2D observations. Our method addresses this by jointly optimizing three neural components: a neural Signed Distance Function (SDF) to represent complex geometry, a spatially-varying neural field for predicting PBR material parameters (albedo, roughness, metallic), and an MLP-based model for capturing unknown environmental lighting. The key to our approach is a physically-based differentiable rendering layer that connects these 3D properties to the input images, allowing for end-to-end optimization. We introduce a set of carefully designed physical priors and geometric regularizations, including a material smoothness loss and an Eikonal loss, to effectively constrain the problem and achieve robust decomposition. Extensive experiments on both synthetic and real-world datasets (e.g., DTU) demonstrate that MatDecompSDF surpasses state-of-the-art methods in geometric accuracy, material fidelity, and novel view synthesis. Crucially, our method produces editable and relightable assets that can be seamlessly integrated into standard graphics pipelines, validating its practical utility for digital content creation.
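To make the two regularizers named above concrete, the following is a minimal PyTorch sketch, not the authors' released code: sdf_net and material_net are hypothetical stand-ins for the paper's neural SDF and spatially-varying material field, and the exact loss weights and sampling strategy are assumptions.

import torch

def eikonal_loss(sdf_net, points):
    # Encourage unit-norm SDF gradients, ||grad f(x)|| ~= 1, so the
    # network stays a valid signed distance function.
    points = points.clone().requires_grad_(True)
    sdf = sdf_net(points)
    grad = torch.autograd.grad(sdf, points,
                               grad_outputs=torch.ones_like(sdf),
                               create_graph=True)[0]
    return ((grad.norm(dim=-1) - 1.0) ** 2).mean()

def material_smoothness_loss(material_net, points, eps=1e-2):
    # Penalize differences in predicted PBR parameters (albedo,
    # roughness, metallic) between nearby points, encouraging
    # piecewise-smooth material decomposition.
    jittered = points + eps * torch.randn_like(points)
    return (material_net(points) - material_net(jittered)).abs().mean()

In a joint optimization of the kind the abstract describes, these terms would be added to a photometric rendering loss with scalar weights chosen per dataset.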
URL
https://arxiv.org/abs/2507.04749