Abstract
In recent years, Denoising Diffusion Probabilistic Models (DDPMs) have attracted significant attention. By constructing a Markov process that starts in the data domain and gradually adds noise until reaching pure white noise, they achieve superior performance in learning data distributions. Yet these models require a large number of diffusion steps to produce aesthetically pleasing samples, which is inefficient. In addition, unlike common generative adversarial networks, the latent space of diffusion models is not interpretable. In this work, we propose to generalize the denoising diffusion process into an Upsampling Diffusion Probabilistic Model (UDPM), which reduces the latent variable dimension in addition to the traditional increase in noise level. As a result, we are able to sample images of size $256\times 256$ with only 7 diffusion steps, over two orders of magnitude fewer than standard DDPMs require. We formally develop the Markovian diffusion processes of UDPM and demonstrate its generation capabilities on the popular FFHQ, LSUN horses, ImageNet, and AFHQv2 datasets. Another favorable property of UDPM is that its latent space is very easy to interpolate, which is not the case with standard diffusion models. Our code is available online: \url{this https URL}
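The forward process described above, which combines dimension reduction with noise addition, can be sketched as a toy step. This is a minimal illustration under assumed choices (2x average pooling as the downsampling operator and a fixed noise level), not the paper's exact parameterization or noise schedule:

```python
import numpy as np

def udpm_forward_step(x, noise_std, rng):
    """One illustrative UDPM-style forward step: downsample the latent by
    2x average pooling, then add Gaussian noise. The pooling operator and
    noise_std here are assumptions for illustration only."""
    h, w, c = x.shape
    # 2x2 average pooling halves each spatial dimension
    x_down = x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))
    return x_down + noise_std * rng.standard_normal(x_down.shape)

rng = np.random.default_rng(0)
x = rng.standard_normal((256, 256, 3))  # stand-in for a 256x256 RGB image
# Seven such steps take a 256x256 latent down to 2x2 while noising it,
# mirroring how UDPM reaches full noise in only 7 diffusion steps.
for t in range(7):
    x = udpm_forward_step(x, noise_std=0.5, rng=rng)
print(x.shape)  # spatial size halves each step: 256 -> 2 after 7 steps
```

The reverse (generative) process would then alternate denoising with upsampling, which is where the model's name comes from.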
URL
https://arxiv.org/abs/2305.16269