Abstract
Transforming large pre-trained low-resolution diffusion models to meet higher-resolution demands, i.e., diffusion extrapolation, significantly improves diffusion adaptability. We propose tuning-free CutDiffusion, which simplifies and accelerates the diffusion extrapolation process, making it more affordable while improving performance. CutDiffusion follows the existing patch-wise extrapolation paradigm but cuts a standard patch diffusion process into an initial phase focused on comprehensive structure denoising and a subsequent phase dedicated to specific detail refinement. Comprehensive experiments highlight several key advantages of CutDiffusion: (1) simple method construction that enables a concise higher-resolution diffusion process without third-party engagement; (2) fast inference speed, achieved through a single-step higher-resolution diffusion process and fewer required inference patches; (3) cheap GPU cost, resulting from patch-wise inference and the use of fewer patches during comprehensive structure denoising; (4) strong generation performance, stemming from the emphasis on specific detail refinement.
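The two-phase cut described above can be sketched as a toy denoising loop: before a cut step, non-overlapping patches are denoised for comprehensive structure; after it, overlapping patches are denoised and averaged for detail refinement. This is a minimal illustrative sketch only; `denoise_step` stands in for a pretrained diffusion model's single denoising step, and all function names, strides, and step counts here are assumptions, not the paper's actual implementation.

```python
import numpy as np

def denoise_step(patch_latent, t):
    # Hypothetical stand-in for one denoising step of a pretrained
    # low-resolution diffusion model (a real pipeline would call its UNet
    # and scheduler here); this toy version just shrinks the latent.
    return patch_latent * 0.9

def cut_diffusion(latent, total_steps=10, cut_step=6, patch=4):
    """Toy sketch of a two-phase patch-wise extrapolation loop.

    Phase 1 (t < cut_step): comprehensive structure denoising over
    non-overlapping patches. Phase 2 (t >= cut_step): specific detail
    refinement over overlapping patches, averaging the overlaps.
    """
    h, w = latent.shape
    x = latent.copy()
    for t in range(total_steps):
        if t < cut_step:
            # Phase 1: tile the latent with non-overlapping patches.
            for i in range(0, h, patch):
                for j in range(0, w, patch):
                    x[i:i + patch, j:j + patch] = denoise_step(
                        x[i:i + patch, j:j + patch], t)
        else:
            # Phase 2: overlapping patches (half-patch stride); overlapping
            # predictions are accumulated and averaged for smooth detail.
            acc = np.zeros_like(x)
            cnt = np.zeros_like(x)
            stride = patch // 2
            for i in range(0, h - patch + 1, stride):
                for j in range(0, w - patch + 1, stride):
                    acc[i:i + patch, j:j + patch] += denoise_step(
                        x[i:i + patch, j:j + patch], t)
                    cnt[i:i + patch, j:j + patch] += 1
            x = np.where(cnt > 0, acc / np.maximum(cnt, 1), x)
    return x
```

The averaging in phase 2 is one common way patch-wise methods reconcile overlapping predictions; the actual CutDiffusion procedure may differ in how patches are sampled and fused.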
URL
https://arxiv.org/abs/2404.15141