Abstract
We introduce a joint diffusion model that simultaneously learns meaningful internal representations suited to both generative and predictive tasks. Joint machine learning models that can both synthesize and classify data often trade performance between those tasks or are unstable to train. In this work, we start from a set of empirical observations indicating that the internal representations built by contemporary deep diffusion-based generative models are useful in both generative and predictive settings. We then extend the vanilla diffusion model with a classifier that shares parametrization with the denoising objective, which allows for stable joint training. The resulting joint diffusion model offers superior performance across various tasks, including generative modeling, semi-supervised classification, and domain adaptation.
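As a rough illustration of the joint objective described above, the sketch below combines an epsilon-prediction denoising loss and a cross-entropy classification loss computed from one shared encoder, so both tasks optimize the same internal representation. This is a minimal, hypothetical toy (linear layers, a single noise level, invented dimensions), not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, for illustration only).
d_in, d_feat, n_classes, batch = 8, 16, 3, 4

# Shared encoder parameters: both objectives depend on W_enc,
# mirroring the "shared parametrization" between tasks.
W_enc = rng.normal(scale=0.1, size=(d_in, d_feat))
W_den = rng.normal(scale=0.1, size=(d_feat, d_in))       # denoiser head
W_cls = rng.normal(scale=0.1, size=(d_feat, n_classes))  # classifier head

x = rng.normal(size=(batch, d_in))           # clean data
y = rng.integers(0, n_classes, size=batch)   # class labels
eps = rng.normal(size=(batch, d_in))         # Gaussian noise
alpha = 0.9                                  # one noise level, for brevity
x_t = np.sqrt(alpha) * x + np.sqrt(1 - alpha) * eps  # noised input

h = np.tanh(x_t @ W_enc)  # shared internal representation

# Denoising objective: predict the added noise (epsilon-prediction).
eps_hat = h @ W_den
loss_diff = np.mean((eps_hat - eps) ** 2)

# Classification objective on the same features.
logits = h @ W_cls
logp = logits - np.log(np.sum(np.exp(logits), axis=1, keepdims=True))
loss_cls = -np.mean(logp[np.arange(batch), y])

# Joint objective: a single scalar trained for both tasks.
loss = loss_diff + loss_cls
```

In a real implementation the gradient of `loss` would flow into the shared encoder from both heads, which is what ties the generative and predictive representations together.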
URL
https://arxiv.org/abs/2301.13622