Abstract
Procedural noise is a fundamental component of computer graphics pipelines, offering a flexible way to generate textures that exhibit "natural" random variation. Many different types of noise exist, each produced by a separate algorithm. In this paper, we present a single generative model which can learn to generate multiple types of noise as well as blend between them. In addition, it is capable of producing spatially-varying noise blends despite not having access to such data for training. These features are enabled by training a denoising diffusion model using a novel combination of data augmentation and network conditioning techniques. Like procedural noise generators, the model's behavior is controllable via interpretable parameters and a source of randomness. We use our model to produce a variety of visually compelling noise textures. We also present an application of our model to improving inverse procedural material design; using our model in place of fixed-type noise nodes in a procedural material graph results in higher-fidelity material reconstructions without needing to know the type of noise in advance.
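The abstract describes a denoising diffusion model whose sampling is steered by interpretable conditioning parameters (e.g. noise-type blend weights) and a source of randomness. The paper does not give its architecture here, so the sketch below is only a minimal illustration of that general idea: a DDPM-style ancestral sampling loop where a placeholder denoiser (`toy_denoiser`, a hypothetical stand-in for the trained network) receives a conditioning array. All names and the denoiser itself are assumptions for illustration, not the authors' method.

```python
import math
import numpy as np

rng = np.random.default_rng(0)  # the controllable source of randomness

def toy_denoiser(x, t, cond):
    # Hypothetical stand-in for the trained conditional network:
    # predicts the noise component of x, biased by the conditioning
    # parameters `cond` (e.g. blend weights over noise types).
    return 0.9 * x + 0.1 * cond

def ddpm_sample(shape, cond, steps=50):
    """Minimal DDPM ancestral sampling loop conditioned on `cond`."""
    betas = np.linspace(1e-4, 0.02, steps)      # standard linear schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    x = rng.standard_normal(shape)              # start from pure Gaussian noise
    for t in reversed(range(steps)):
        eps = toy_denoiser(x, t, cond)
        coef = betas[t] / math.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / math.sqrt(alphas[t])
        if t > 0:                               # re-inject noise except at the last step
            x += math.sqrt(betas[t]) * rng.standard_normal(shape)
    return x

# Conditioning with a per-pixel parameter map is what would allow
# spatially-varying blends at sampling time.
texture = ddpm_sample((8, 8), cond=np.full((8, 8), 0.5))
```

Because `cond` can vary per pixel, the same loop structure accommodates spatially-varying blends even though the real model's conditioning mechanism is more involved.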
URL
https://arxiv.org/abs/2404.16292