Abstract
We present a novel framework for efficiently and effectively extending powerful continuous diffusion processes to discrete modeling. Previous approaches have suffered from the discrepancy between discrete data and continuous modeling; our study reveals that a main cause is the absence of guidance from discrete boundaries when learning probability contours. To address this issue, we propose a two-step forward process that first estimates the boundary as a prior distribution and then rescales the forward trajectory to construct a boundary conditional diffusion model. The reverse process is proportionally adjusted to guarantee that the learned contours yield more precise discrete data. Experimental results indicate that our approach achieves strong performance in both language modeling and discrete image generation tasks. In language modeling, our approach surpasses previous state-of-the-art continuous diffusion language models on three translation tasks and a summarization task, while also demonstrating competitive performance compared to auto-regressive transformers. Moreover, our method matches continuous diffusion models on discrete ordinal pixels and establishes a new state of the art for categorical image generation on the CIFAR-10 dataset.
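The abstract's two-step forward process can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the cosine noise schedule, the per-token boundary half-width `boundary_halfwidth`, and the function `forward_step` are hypothetical stand-ins, not the paper's actual formulation.

```python
import numpy as np

def forward_step(x0, t, T, boundary_halfwidth=0.5, rng=None):
    """Illustrative two-step forward process on real-valued embeddings
    of discrete tokens x0 (sketch, not the paper's method).

    Step 1 (assumed): a standard Gaussian forward step under a cosine
    schedule produces a noisy sample xt.
    Step 2 (assumed): the trajectory is rescaled around x0 using an
    estimated boundary half-width, so noisy samples stay anchored to
    the discrete cell around each token.
    """
    rng = rng or np.random.default_rng()
    alpha_bar = np.cos(0.5 * np.pi * t / T) ** 2  # cosine schedule (assumed)
    noise = rng.standard_normal(np.shape(x0))
    xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise
    # Rescale the displacement from x0 by the boundary prior (assumed form).
    return x0 + boundary_halfwidth * (xt - x0)
```

At `t = 0` the schedule gives `alpha_bar = 1`, so the sample is returned unchanged; as `t` grows, the displacement from `x0` is shrunk by the boundary half-width rather than left free, which is one plausible reading of "rescaling the forward trajectory".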
URL
https://arxiv.org/abs/2410.22380