Abstract
Guided diffusion is a technique for conditioning the output of a diffusion model at sampling time without retraining the network for each specific task. One drawback of diffusion models, however, is their slow sampling process. Recent techniques can accelerate unguided sampling by applying high-order numerical methods to the sampling process when it is viewed as a differential equation. Surprisingly, we discover that the same techniques do not work for guided sampling, and its acceleration has been little explored. This paper identifies the culprit of this problem and provides a solution based on operator splitting methods, motivated by our key finding that classical high-order numerical methods are unsuitable for the conditional function. Our proposed method can reuse high-order methods for guided sampling and can generate images with the same quality as a 250-step DDIM baseline using 32-58% less sampling time on ImageNet256. We also demonstrate usage on a wide variety of conditional generation tasks, such as text-to-image generation, colorization, inpainting, and super-resolution.
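As a toy illustration of the operator-splitting idea the abstract refers to (a minimal sketch under simplifying assumptions, not the paper's actual sampler; the function names and the example ODE are hypothetical), one can split an ODE dy/dt = f(y) + g(y) into two sub-problems, each solved exactly, and alternate between them. Strang splitting interleaves half-steps and full-steps to reach second-order accuracy:

```python
import math

def flow_f(y, t, a):
    # exact flow of the sub-problem dy/dt = -a*y over time t
    return y * math.exp(-a * t)

def flow_g(y, t, b):
    # exact flow of the sub-problem dy/dt = -b*y**2 over time t
    return y / (1.0 + b * y * t)

def strang_split(y0, a, b, t_end, n_steps):
    """Strang splitting for dy/dt = -a*y - b*y**2:
    half-step of f, full-step of g, half-step of f per iteration."""
    h = t_end / n_steps
    y = y0
    for _ in range(n_steps):
        y = flow_f(y, h / 2, a)
        y = flow_g(y, h, b)
        y = flow_f(y, h / 2, a)
    return y

def exact(y0, a, b, t):
    # closed-form solution of the Bernoulli equation dy/dt = -a*y - b*y**2
    return 1.0 / ((1.0 / y0 + b / a) * math.exp(a * t) - b / a)

approx = strang_split(1.0, 1.0, 1.0, 1.0, 50)
print(abs(approx - exact(1.0, 1.0, 1.0, 1.0)))  # small splitting error
```

The appeal in the guided-sampling setting is analogous: when one term of the combined dynamics (here, the guidance/conditional part) is poorly suited to a single high-order solver, splitting lets each term be integrated with a method matched to its behavior.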
URL
https://arxiv.org/abs/2301.11558