Abstract
Reconstructing a 3D point cloud from a conditioning sketch is challenging. Existing methods often work directly in 3D space, but domain variability and the difficulty of recovering accurate 3D structure from 2D sketches remain significant obstacles. Moreover, an ideal model should also accept prompts for control in addition to the sparse sketch, which poses challenges for multi-modal fusion. We propose DiffS-NOCS (Diffusion-based Sketch-to-NOCS Map), which leverages ControlNet with a modified multi-view decoder to generate NOCS maps, which embed 3D structure and position information in 2D space, from sketches. The 3D point cloud is reconstructed by combining multiple NOCS maps from different views. To enhance sketch understanding, we integrate a viewpoint encoder that extracts viewpoint features. Additionally, we design a feature-level multi-view aggregation network as the denoising module, facilitating cross-view information exchange and improving 3D consistency in NOCS map generation. Experiments on ShapeNet demonstrate that DiffS-NOCS achieves controllable and fine-grained point cloud reconstruction aligned with sketches.
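The final reconstruction step described above is simple in principle: because every NOCS map pixel stores a coordinate in the shared Normalized Object Coordinate Space, per-view point sets can be unioned directly without any pose alignment. A minimal illustrative sketch (not the paper's code; function names and shapes are assumptions):

```python
import numpy as np

def nocs_map_to_points(nocs_map, mask):
    # nocs_map: (H, W, 3) array; each foreground pixel stores the (x, y, z)
    # coordinate of the surface point in Normalized Object Coordinate Space.
    # mask: (H, W) boolean foreground mask. Returns an (N, 3) point set.
    return nocs_map[mask]

def fuse_views(nocs_maps, masks):
    # NOCS is a canonical frame shared by all views, so fusing views
    # reduces to concatenating their per-view point sets.
    return np.concatenate(
        [nocs_map_to_points(m, k) for m, k in zip(nocs_maps, masks)],
        axis=0,
    )

# Toy example: two 2x2 "views" with all pixels foreground.
rng = np.random.default_rng(0)
maps = [rng.random((2, 2, 3)), rng.random((2, 2, 3))]
masks = [np.ones((2, 2), dtype=bool)] * 2
pts = fuse_views(maps, masks)
print(pts.shape)  # (8, 3)
```

In the actual pipeline the maps would come from the diffusion decoder and the masks from the generated foreground, but the fusion logic stays this simple because NOCS removes the need for cross-view registration.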
URL
https://arxiv.org/abs/2506.12835