Abstract
We propose an Authentic Discrete Diffusion (ADD) framework that fundamentally redefines prior pseudo-discrete approaches by preserving core diffusion characteristics directly in the one-hot space through a suite of coordinated mechanisms. Unlike conventional "pseudo" discrete diffusion (PDD) methods, ADD reformulates the diffusion input by directly using float-encoded one-hot class data, without relying on diffusion in continuous latent spaces or on masking policies. At its core, a timestep-conditioned cross-entropy loss is introduced between the diffusion model's outputs and the original one-hot labels. This synergistic design establishes a bridge between discriminative and generative learning. Our experiments demonstrate that ADD not only achieves superior performance on classification tasks compared to the baseline, but also exhibits strong text generation capabilities on image captioning. Extensive ablations validate the measurable gains of each component.
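To make the described mechanism concrete, below is a minimal PyTorch sketch of a training step that diffuses float-encoded one-hot class vectors and applies a timestep-conditioned cross-entropy loss against the original labels. The `model(x_t, t)` interface, the Gaussian corruption of the one-hot vectors, and the linear alpha schedule are illustrative assumptions; the abstract does not specify ADD's exact forward process or architecture.

```python
import torch
import torch.nn.functional as F

def add_training_step(model, labels, num_classes, T=1000):
    """One hedged training step in the spirit of ADD.

    Assumptions (not specified in the abstract):
      - `model(x_t, t)` returns per-class logits of shape (B, num_classes)
      - the one-hot inputs are corrupted with Gaussian noise
      - a simple linear alpha schedule over T timesteps
    """
    B = labels.shape[0]
    x0 = F.one_hot(labels, num_classes).float()        # float-encoded one-hot input
    t = torch.randint(0, T, (B,), device=x0.device)    # sample random timesteps
    alpha = 1.0 - t.float() / T                        # assumed noise schedule
    noise = torch.randn_like(x0)
    # Corrupt the one-hot vectors directly, without a continuous latent space.
    x_t = alpha.sqrt()[:, None] * x0 + (1.0 - alpha).sqrt()[:, None] * noise
    logits = model(x_t, t)                             # timestep-conditioned prediction
    # Cross-entropy between the model's outputs and the original one-hot labels.
    return F.cross_entropy(logits, labels)
```

In this sketch the loss is discriminative (cross-entropy over classes) while the training dynamics follow a diffusion-style corruption process, which is one way to read the abstract's claimed bridge between discriminative and generative learning.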
URL
https://arxiv.org/abs/2510.01047