Abstract
Recent advancements in diffusion models have shown remarkable proficiency in editing 2D images based on text prompts. However, extending these techniques to edit scenes in Neural Radiance Fields (NeRF) is complex, as editing individual 2D frames can result in inconsistencies across multiple views. Our crucial insight is that a NeRF scene's geometry can serve as a bridge to integrate these 2D edits. Utilizing this geometry, we employ a depth-conditioned ControlNet to enhance the coherence of each 2D image modification. Moreover, we introduce an inpainting approach that leverages the depth information of NeRF scenes to distribute 2D edits across different images, ensuring robustness against errors and resampling challenges. Our results reveal that this methodology achieves more consistent, lifelike, and detailed edits than existing leading methods for text-driven NeRF scene editing.
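The core idea of distributing a 2D edit across views via the scene's depth can be sketched as a simple geometric reprojection. The snippet below is a minimal, illustrative version assuming a pinhole camera model and a NeRF-rendered depth map; the function name, arguments, and overall setup are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def reproject_edit(depth_a, K, T_a2b, edit_mask):
    """Warp pixel coordinates of an edited region in view A into view B,
    using view A's rendered depth (hypothetical helper, pinhole model).

    depth_a:   (H, W) depth map rendered from the NeRF for view A
    K:         (3, 3) shared camera intrinsics
    T_a2b:     (4, 4) rigid transform from view A's camera frame to view B's
    edit_mask: (H, W) boolean mask of edited pixels in view A
    Returns:   (2, N) pixel coordinates of those edits in view B
    """
    ys, xs = np.nonzero(edit_mask)
    z = depth_a[ys, xs]
    # Back-project masked pixels to 3D points in view A's camera frame.
    pts_a = np.linalg.inv(K) @ np.stack([xs * z, ys * z, z])
    # Move the points into view B's camera frame (homogeneous coordinates).
    pts_h = np.vstack([pts_a, np.ones((1, pts_a.shape[1]))])
    pts_b = (T_a2b @ pts_h)[:3]
    # Project into view B's image plane.
    uv = K @ pts_b
    return uv[:2] / uv[2:3]
```

Pixels landing at these reprojected coordinates seed the edit in view B, with remaining gaps left to the depth-conditioned inpainting stage described above.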
URL
https://arxiv.org/abs/2404.04526