Abstract
We present GenesisTex, a novel method for synthesizing textures for 3D geometries from text descriptions. GenesisTex adapts a pretrained image diffusion model to texture space via texture-space sampling. Specifically, we maintain a latent texture map for each viewpoint, which is updated with the noise predicted on the rendering of the corresponding viewpoint. The sampled latent texture maps are then decoded into a final texture map. During sampling, we enforce both global and local consistency across multiple viewpoints: global consistency is achieved by integrating style-consistency mechanisms into the noise-prediction network, and local consistency is achieved by dynamically aligning the latent textures. Finally, we apply reference-based inpainting and Img2Img on denser views for texture refinement. Our approach overcomes the slow optimization of distillation-based methods and the instability of inpainting-based methods. Experiments on meshes from various sources demonstrate that our method surpasses baseline methods both quantitatively and qualitatively.
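The sampling process described above can be illustrated with a minimal sketch. This is not the paper's implementation: the noise-prediction network is replaced by a random placeholder, the denoising update is a toy rule rather than a real diffusion scheduler, and the "dynamic alignment" and "decoding" steps are approximated by simple averaging; only the overall structure (one latent texture per viewpoint, per-step noise prediction, cross-view alignment, final decode) follows the abstract.

```python
import numpy as np


def predict_noise(latent, t, rng):
    # Placeholder for the pretrained diffusion noise-prediction network
    # (hypothetical stand-in; the real model conditions on the text prompt
    # and the rendering of the corresponding viewpoint).
    return rng.standard_normal(latent.shape) * 0.1


def texture_space_sampling(num_views=4, tex_res=8, steps=10, seed=0):
    """Toy sketch of texture-space sampling across multiple viewpoints."""
    rng = np.random.default_rng(seed)
    # One latent texture map per viewpoint, initialized with Gaussian noise.
    latents = [rng.standard_normal((tex_res, tex_res)) for _ in range(num_views)]
    for t in range(steps, 0, -1):
        # Update each view's latent texture with its predicted noise
        # (a toy update rule, not a real diffusion scheduler step).
        noises = [predict_noise(z, t, rng) for z in latents]
        latents = [z - n / steps for z, n in zip(latents, noises)]
        # Dynamically align latent textures for local consistency,
        # approximated here by blending each latent toward the cross-view mean.
        mean = np.mean(latents, axis=0)
        latents = [0.8 * z + 0.2 * mean for z in latents]
    # "Decode" the sampled latent textures into a final texture map
    # (placeholder: averaging stands in for the VAE decoder).
    return np.mean(latents, axis=0)


texture = texture_space_sampling()
print(texture.shape)
```

The sketch only conveys the control flow; in the actual method the per-view latents live in the latent space of the image diffusion model and are fused on the shared UV texture domain.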
URL
https://arxiv.org/abs/2403.17782