Abstract
A major challenge of disentanglement learning with variational autoencoders is the trade-off between disentanglement and reconstruction fidelity. Previous incremental methods with only one latent space cannot optimize these two targets simultaneously, so they expand the information bottleneck during training to shift the optimization from disentanglement to reconstruction. However, a large bottleneck loses the constraint of disentanglement, causing the information diffusion problem. To tackle this issue, we present DeVAE, a novel decremental variational autoencoder with disentanglement-invariant transformations that optimizes multiple objectives in different layers, balancing disentanglement and reconstruction fidelity by gradually decreasing the information bottlenecks of diverse latent spaces. Benefiting from the multiple latent spaces, DeVAE allows simultaneous optimization of multiple objectives, improving reconstruction while keeping the disentanglement constraint and avoiding information diffusion. DeVAE is also compatible with large models that have high-dimensional latent spaces. Experimental results on dSprites and Shapes3D show that DeVAE achieves a good balance between disentanglement and reconstruction, and that it is tolerant of hyperparameter choices and of high-dimensional latent spaces.
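To make the idea of per-space bottleneck constraints concrete, below is a minimal, illustrative sketch (not the authors' implementation): a VAE objective with several latent spaces, each under its own beta-VAE-style KL weight, so a tightly constrained space can preserve the disentanglement constraint while a loosely constrained space is free to optimize reconstruction. The names `posteriors`, `betas`, and `kl_diag_gaussian` are assumptions introduced for illustration; how DeVAE links the spaces via disentanglement-invariant transformations and how it schedules the decreasing bottlenecks are specified in the paper, not here.

```python
# Minimal sketch of a multi-latent-space VAE objective, assuming one
# beta-VAE-style KL weight per latent space. Illustrative only; not the
# DeVAE reference implementation.
import torch


def kl_diag_gaussian(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    """KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * torch.sum(logvar.exp() + mu.pow(2) - 1.0 - logvar, dim=-1)


def multi_space_vae_loss(recon_loss: torch.Tensor,
                         posteriors: list[tuple[torch.Tensor, torch.Tensor]],
                         betas: list[float]) -> torch.Tensor:
    """Reconstruction term plus one weighted KL term per latent space.

    posteriors: [(mu_1, logvar_1), ..., (mu_L, logvar_L)], one pair per space.
    betas:      per-space KL weights; a large beta keeps that space's
                information bottleneck tight (disentanglement constraint),
                while a small beta leaves room for reconstruction fidelity.
    """
    loss = recon_loss
    for (mu, logvar), beta in zip(posteriors, betas):
        loss = loss + beta * kl_diag_gaussian(mu, logvar).mean()
    return loss
```

In this sketch the objectives of all latent spaces are summed into a single loss, which is one simple way to optimize them simultaneously rather than sequentially as in incremental single-space methods.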
Abstract (translated)
A major challenge of disentanglement learning with variational autoencoders is the trade-off between disentanglement and reconstruction fidelity. Previous incremental methods involve only a single latent space and cannot optimize these two targets simultaneously, so they expand the information bottleneck during training to move the optimization from disentanglement to reconstruction. However, a large bottleneck loses the constraint of disentanglement, causing the information diffusion problem. To solve this problem, we propose a decremental variational autoencoder with invariant transformations, named DeVAE, which optimizes multiple objectives in different layers to balance disentanglement and reconstruction fidelity by gradually decreasing the information bottlenecks of different latent spaces. Benefiting from multiple latent spaces, DeVAE allows the simultaneous optimization of multiple objectives, optimizing reconstruction while keeping the constraint of disentanglement and avoiding information diffusion. DeVAE is also suitable for large models with high-dimensional latent spaces. In experiments on dSprites and Shapes3D, DeVAE achieves a good balance between disentanglement and reconstruction, and shows tolerance to hyperparameters and high-dimensional latent spaces.
URL
https://arxiv.org/abs/2303.12959