Abstract
In this paper, aiming to understand the underlying explainable factors behind observations and to model the conditional generation process on these factors, we propose a new task, disentanglement of diffusion probabilistic models (DPMs), which takes advantage of the remarkable modeling ability of DPMs. To tackle this task, we devise an unsupervised approach named DisDiff, achieving disentangled representation learning within the framework of DPMs for the first time. Given a pre-trained DPM, DisDiff automatically discovers the inherent factors behind the image data and disentangles the gradient field of the DPM into sub-gradient fields, each conditioned on the representation of a discovered factor. We further propose a novel Disentangling Loss to facilitate the disentanglement of the representations and sub-gradient fields. Extensive experiments on synthetic and real-world datasets demonstrate the effectiveness of DisDiff.
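To make the decomposition concrete, below is a minimal PyTorch sketch of the idea the abstract describes: a frozen pre-trained DPM supplies an unconditional noise prediction, an encoder maps an image to K factor representations, and K small modules predict per-factor sub-gradient fields whose sum shifts the prediction toward the factor-conditioned score. This is an illustrative sketch under assumptions, not the authors' implementation; all module names (FactorEncoder, SubGradientField, conditional_eps) and the FiLM-style conditioning are hypothetical choices.

```python
# Hypothetical sketch of the DisDiff-style decomposition (not the paper's code).
# Assumed setup: eps_uncond comes from a frozen pre-trained DPM; the encoder,
# per-factor heads, and FiLM conditioning below are illustrative assumptions.
import torch
import torch.nn as nn


class FactorEncoder(nn.Module):
    """Encodes an image into K factor representations z_1..z_K."""

    def __init__(self, k_factors: int = 6, dim: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One projection head per discovered factor.
        self.heads = nn.ModuleList(nn.Linear(64, dim) for _ in range(k_factors))

    def forward(self, x: torch.Tensor) -> list[torch.Tensor]:
        h = self.backbone(x)
        return [head(h) for head in self.heads]


class SubGradientField(nn.Module):
    """Predicts one factor's sub-gradient as a correction to the DPM output."""

    def __init__(self, dim: int = 64):
        super().__init__()
        # FiLM-style conditioning: per-channel scale and shift from z.
        self.film = nn.Linear(dim, 2 * 3)

    def forward(self, eps: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        scale, shift = self.film(z).chunk(2, dim=-1)
        scale = scale[..., None, None]  # broadcast over H, W
        shift = shift[..., None, None]
        return eps * scale + shift


def conditional_eps(eps_uncond, sub_fields, zs):
    """Conditional prediction = unconditional eps + sum of sub-gradient fields."""
    return eps_uncond + sum(f(eps_uncond, z) for f, z in zip(sub_fields, zs))
```

In this reading, training would keep the pre-trained DPM frozen and optimize only the encoder and sub-gradient modules, with a disentangling objective encouraging each z_k to control a distinct factor; the actual loss is the paper's Disentangling Loss, whose exact form is not given here.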
URL
https://arxiv.org/abs/2301.13721