Abstract
A generative adversarial network (GAN) is a type of generative model that maps high-dimensional noise to samples from a target distribution. However, the noise dimension a GAN requires is not well understood. Previous approaches view a GAN as a mapping from one continuous distribution to another. In this paper, we propose to view a GAN as a discrete sampler instead. From this perspective, we build a connection between the minimum noise required and the number of bits needed to losslessly compress the images. Furthermore, to understand the behaviour of a GAN when the noise dimension is limited, we propose a divergence-entropy trade-off, which characterizes the best divergence achievable when noise is limited. Like the rate-distortion trade-off, it can be solved numerically when the source distribution is known. Finally, we verify our theory with experiments on image generation.
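The connection above rests on a standard information-theoretic fact: when the generator is viewed as a discrete sampler over a known target distribution p, the bits needed to losslessly compress a sample are lower-bounded by the Shannon entropy H(p). A minimal sketch of that lower bound (the function name and example distributions are illustrative, not from the paper):

```python
import numpy as np

def shannon_entropy_bits(p):
    """Entropy in bits of a discrete distribution given as a probability vector.

    This is the lossless-compression lower bound, and hence (per the paper's
    view of a GAN as a discrete sampler) a lower bound on the noise bits needed
    to sample the distribution exactly.
    """
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # treat 0 * log2(0) as 0
    return float(-np.sum(p * np.log2(p)))

# A uniform distribution over 8 images needs log2(8) = 3 bits of noise.
print(shannon_entropy_bits(np.full(8, 1 / 8)))   # → 3.0

# A skewed distribution over the same support needs strictly fewer bits.
print(shannon_entropy_bits([0.9, 0.05, 0.03, 0.02]))
```

When the noise budget falls below this entropy, the sampler can no longer match the target exactly, which is the regime the divergence-entropy trade-off describes.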
Abstract (translated)
A generative adversarial network (GAN) is a type of generative model that maps high-dimensional noise to samples in a target distribution. However, the noise dimension a GAN requires is not well understood. Previous approaches view a GAN as a mapping from one continuous distribution to another. In this paper, we propose viewing a GAN as a discrete sampler. From this perspective, we establish a connection between the minimum noise required and the number of bits needed to losslessly compress the images. Furthermore, to understand the behaviour of a GAN when the noise dimension is limited, we propose a divergence-entropy trade-off, which describes the best divergence achievable when noise is limited. When the source distribution is known, it can be solved numerically. Finally, we verify our theory with experiments on image generation.
URL
https://arxiv.org/abs/2403.09196