Denoising diffusion models have become a mainstream approach for image generation; however, training these models often suffers from slow convergence. In this paper, we discover that the slow convergence is partly due to conflicting optimization directions between timesteps. To address this issue, we treat diffusion training as a multi-task learning problem, and introduce a simple yet effective approach referred to as Min-SNR-$\gamma$. This method adapts the loss weights of timesteps based on clamped signal-to-noise ratios, which effectively balances the conflicts among timesteps. Our results demonstrate a significant improvement in convergence speed, 3.4$\times$ faster than previous weighting strategies. It is also more effective, achieving a new record FID score of 2.06 on the ImageNet $256\times256$ benchmark using smaller architectures than those employed in previous state-of-the-art methods.
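The clamped-SNR weighting described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes a DDPM-style linear beta schedule and an epsilon-prediction loss (for which the Min-SNR-$\gamma$ weight takes the form $\min(\mathrm{SNR}(t), \gamma)/\mathrm{SNR}(t)$); the schedule endpoints, `num_timesteps`, and `gamma` value are placeholder assumptions.

```python
import numpy as np

def min_snr_gamma_weights(num_timesteps=1000, gamma=5.0):
    """Per-timestep loss weights via clamped signal-to-noise ratios (sketch)."""
    # Assumed DDPM-style linear beta schedule; the paper's schedule may differ.
    betas = np.linspace(1e-4, 0.02, num_timesteps)
    alphas_cumprod = np.cumprod(1.0 - betas)
    # SNR(t) = alpha_bar_t / (1 - alpha_bar_t): high at early (clean) timesteps,
    # low at late (noisy) timesteps.
    snr = alphas_cumprod / (1.0 - alphas_cumprod)
    # Min-SNR-gamma: clamp the SNR at gamma, then normalize for epsilon prediction.
    # This caps the influence of low-noise timesteps instead of letting their
    # large SNR dominate the objective.
    return np.minimum(snr, gamma) / snr

weights = min_snr_gamma_weights()
```

With this weighting, high-SNR (low-noise) timesteps are down-weighted toward $\gamma/\mathrm{SNR}(t)$, while low-SNR timesteps keep a weight of 1, which is one way to read the paper's claim of balancing conflicting optimization directions across timesteps.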