Abstract
Generating high-quality, photorealistic 3D assets remains a longstanding challenge in 3D vision and computer graphics. Although state-of-the-art generative models such as diffusion models have made significant progress in 3D generation, they often fall short of human-designed content due to their limited ability to follow instructions, align with human preferences, or produce realistic textures, geometries, and physical attributes. In this paper, we introduce Nabla-R2D3, a highly effective and sample-efficient reinforcement learning alignment framework for 3D-native diffusion models using 2D rewards. Built upon the recently proposed Nabla-GFlowNet method, which performs reward finetuning by matching the score function to reward gradients in a principled manner, our Nabla-R2D3 enables effective adaptation of 3D diffusion models using only 2D reward signals. Extensive experiments show that, unlike vanilla finetuning baselines, which either struggle to converge or suffer from reward hacking, Nabla-R2D3 consistently achieves higher rewards and reduced prior forgetting within a few finetuning steps.
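To make the core idea concrete: the method steers a 3D-native diffusion model by matching its score function to the gradient of a 2D reward evaluated on rendered views of the generated asset. The sketch below is a heavily simplified illustration of that recipe, not the paper's actual Nabla-GFlowNet / Nabla-R2D3 objective; `denoiser`, `denoiser_ref`, `render_views`, `reward_2d`, the epsilon parameterization, and the one-step clean-sample estimate are all assumptions made for illustration.

```python
# Hypothetical sketch: finetune a 3D diffusion model with a 2D reward by
# nudging its noise prediction toward "reference prediction + reward gradient".
# All model/renderer/reward interfaces here are illustrative stand-ins.
import torch
import torch.nn.functional as F

def reward_gradient_matching_loss(denoiser, denoiser_ref, x_t, sigma_t, t,
                                  render_views, reward_2d, beta=1.0):
    # x_t: noisy 3D representation, shape (B, ...); sigma_t: noise level, shape (B,)
    x_t = x_t.detach().requires_grad_(True)
    sigma = sigma_t.view(-1, *([1] * (x_t.dim() - 1)))

    # Frozen pretrained prior (reference) prediction.
    with torch.no_grad():
        eps_ref = denoiser_ref(x_t, t)

    # One-step estimate of the clean sample, rendered to 2D and scored.
    x0_hat = x_t - sigma * eps_ref          # gradient flows back to x_t only
    images = render_views(x0_hat)           # differentiable multi-view rendering
    reward = reward_2d(images).sum()        # differentiable 2D reward (e.g. an aesthetic score)
    grad_r = torch.autograd.grad(reward, x_t)[0]

    # Reward-tilted target: the prior's prediction shifted by beta times the
    # reward gradient, expressed in the epsilon parameterization (score ~ -eps / sigma).
    target = (eps_ref - beta * sigma * grad_r).detach()
    eps = denoiser(x_t, t)
    return F.mse_loss(eps, target)
```

In a sketch like this, `beta` trades off maximizing the 2D reward against staying close to the pretrained 3D prior; how that trade-off is balanced in the actual method is detailed in the paper.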
URL
https://arxiv.org/abs/2506.15684