Abstract
Recent advancements in text-to-image generative systems have been largely driven by diffusion models. However, single-stage text-to-image diffusion models still face challenges in computational efficiency and in the refinement of image details. To tackle these issues, we propose CogView3, an innovative cascaded framework that enhances the performance of text-to-image diffusion. CogView3 is the first model to implement relay diffusion in the realm of text-to-image generation, executing the task by first creating low-resolution images and then applying relay-based super-resolution. This methodology not only produces competitive text-to-image outputs but also greatly reduces both training and inference costs. Our experimental results demonstrate that CogView3 outperforms SDXL, the current state-of-the-art open-source text-to-image diffusion model, with a 77.0% win rate in human evaluations, while requiring only about half the inference time. The distilled variant of CogView3 achieves comparable performance while using only 1/10 of the inference time required by SDXL.
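The cascaded structure described above can be sketched as a two-stage pipeline: a base diffusion stage produces a low-resolution image, which is upsampled and then refined by a relay super-resolution stage that starts denoising from the upsampled image rather than from pure noise. The sketch below is purely illustrative: all function names, sizes, and step counts are assumptions, not the actual CogView3 API, and the diffusion stages are replaced by trivial stand-ins.

```python
# Hypothetical sketch of a cascaded relay-diffusion pipeline (illustrative
# names and stand-in stages; NOT the actual CogView3 implementation).
import random

def base_diffusion(prompt, size=64):
    """Stand-in for the base text-to-image stage: returns a low-resolution
    'image' as a nested list of pixel values in [0, 1)."""
    rng = random.Random(len(prompt))  # deterministic toy seed
    return [[rng.random() for _ in range(size)] for _ in range(size)]

def upsample(image, factor=2):
    """Nearest-neighbour upsample, a simple stand-in for the real resampling."""
    return [[px for px in row for _ in range(factor)]
            for row in image for _ in range(factor)]

def relay_super_resolution(upsampled):
    """Stand-in for the relay stage: instead of denoising from pure noise,
    start from the upsampled image and apply only a light refinement, which
    is what makes the cascade cheap at inference time."""
    rng = random.Random(0)
    return [[min(1.0, max(0.0, px + 0.01 * (rng.random() - 0.5)))
             for px in row] for row in upsampled]

def cascaded_pipeline(prompt):
    low_res = base_diffusion(prompt)       # stage 1: low-resolution base image
    coarse = upsample(low_res)             # 2x upsample
    return relay_super_resolution(coarse)  # stage 2: relay refinement

image = cascaded_pipeline("a photo of a red fox in the snow")
print(len(image), len(image[0]))  # 128 128
```

Because the relay stage conditions on the upsampled base image, it needs far fewer denoising steps than a from-scratch high-resolution model, which is consistent with the inference-time savings the abstract reports.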
URL
https://arxiv.org/abs/2403.05121