Nabla-R2D3: Effective and Efficient 3D Diffusion Alignment with 2D Rewards

2025-06-18 17:59:59
Qingming Liu, Zhen Liu, Dinghuai Zhang, Kui Jia

Abstract

Generating high-quality and photorealistic 3D assets remains a longstanding challenge in 3D vision and computer graphics. Although state-of-the-art generative models, such as diffusion models, have made significant progress in 3D generation, they often fall short of human-designed content due to limited ability to follow instructions, align with human preferences, or produce realistic textures, geometries, and physical attributes. In this paper, we introduce Nabla-R2D3, a highly effective and sample-efficient reinforcement learning alignment framework for 3D-native diffusion models using 2D rewards. Built upon the recently proposed Nabla-GFlowNet method, which matches the score function to reward gradients in a principled manner for reward finetuning, our Nabla-R2D3 enables effective adaptation of 3D diffusion models using only 2D reward signals. Extensive experiments show that, unlike vanilla finetuning baselines which either struggle to converge or suffer from reward hacking, Nabla-R2D3 consistently achieves higher rewards and reduced prior forgetting within a few finetuning steps.

Abstract (translated)

Generating high-quality and photorealistic 3D assets remains a long-standing challenge in 3D vision and computer graphics. Although state-of-the-art generative models such as diffusion models have made significant progress in 3D generation, they often fall short of human-designed content due to limitations in following instructions, matching human preferences, or producing realistic textures, geometries, and physical attributes. In this paper we introduce Nabla-R2D3, a highly effective reinforcement learning alignment framework for adapting native 3D diffusion models with 2D reward signals. The framework builds on the recently proposed Nabla-GFlowNet method, which performs reward finetuning in a principled way by matching the score function to reward gradients. Our Nabla-R2D3 enables effective adaptation of 3D diffusion models using only 2D reward signals. Extensive experiments show that, unlike vanilla finetuning baselines, which either struggle to converge or suffer from reward hacking, Nabla-R2D3 consistently achieves higher rewards and less forgetting of prior knowledge within a few finetuning steps.
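
For intuition only, here is a minimal sketch of the general recipe the abstract describes: a purely 2D reward is backpropagated (presumably through a differentiable rendering of the 3D sample) to obtain a reward gradient, which then serves as the matching target for the change that finetuning induces in the score network. Everything below is a hypothetical stand-in rather than the paper's implementation; TinyScoreNet, render_and_reward, and the simple MSE gradient-matching loss are illustrative assumptions, and the actual Nabla-GFlowNet/Nabla-R2D3 objective is more elaborate.

```python
import torch
import torch.nn.functional as F


class TinyScoreNet(torch.nn.Module):
    """Stand-in for a 3D-native diffusion score network (hypothetical)."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim + 1, 256),
            torch.nn.SiLU(),
            torch.nn.Linear(256, dim),
        )

    def forward(self, x, t):
        # Concatenate the (flattened) 3D latent with the diffusion timestep.
        return self.net(torch.cat([x, t[:, None]], dim=-1))


def render_and_reward(x3d):
    """Placeholder for: differentiable renderer -> 2D views -> 2D reward model.
    Returns one scalar reward per sample and is differentiable w.r.t. x3d."""
    views = x3d.tanh()                      # pretend "rendering" into image space
    return -(views - 0.5).pow(2).mean(-1)   # pretend 2D reward (e.g. aesthetics)


dim = 64
pretrained = TinyScoreNet(dim)
finetuned = TinyScoreNet(dim)
finetuned.load_state_dict(pretrained.state_dict())
for p in pretrained.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam(finetuned.parameters(), lr=1e-4)

for step in range(100):
    x_t = torch.randn(8, dim)   # noisy 3D samples at some diffusion timestep
    t = torch.rand(8)

    # 2D reward gradient w.r.t. the 3D sample, obtained by backpropagating
    # through the (differentiable) rendering + 2D-reward pipeline.
    x_req = x_t.detach().requires_grad_(True)
    reward = render_and_reward(x_req).sum()
    reward_grad = torch.autograd.grad(reward, x_req)[0]

    # Gradient-matching-style loss: the change in the score induced by
    # finetuning is pushed toward the 2D reward gradient. The actual
    # Nabla-GFlowNet objective is more involved (time-dependent scaling,
    # residual flow terms); this MSE is only an illustration of the idea.
    residual = finetuned(x_t, t) - pretrained(x_t, t).detach()
    loss = F.mse_loss(residual, reward_grad)

    opt.zero_grad()
    loss.backward()
    opt.step()
```

The point the sketch illustrates is that no 3D reward model is required: gradients of a 2D reward, pulled back to the 3D sample, are enough to steer the finetuned score.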

URL

https://arxiv.org/abs/2506.15684

PDF

https://arxiv.org/pdf/2506.15684.pdf
