Abstract
Most existing image compression approaches perform transform coding in the pixel space to reduce spatial redundancy. However, they struggle to achieve both high realism and high fidelity at low bitrates, because pixel-space distortion may not align with human perception. To address this issue, we introduce a Generative Latent Coding (GLC) architecture, which performs transform coding in the latent space of a generative vector-quantized variational auto-encoder (VQ-VAE) instead of in the pixel space. The generative latent space is characterized by greater sparsity, richer semantics, and better alignment with human perception, making it advantageous for high-realism, high-fidelity compression. Additionally, we introduce a categorical hyper module to reduce the bit cost of hyper-information, and a code-prediction-based supervision to enhance semantic consistency. Experiments demonstrate that GLC maintains high visual quality at less than 0.04 bpp on natural images and less than 0.01 bpp on facial images. On the CLIC2020 test set, we achieve the same FID as MS-ILLM with 45% fewer bits. Furthermore, the powerful generative latent space enables various applications built on the GLC pipeline, such as image restoration and style transfer. The code is available at this https URL.
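To illustrate the core idea of coding in a vector-quantized latent space rather than the pixel space, here is a minimal NumPy sketch. It is not the paper's implementation: the codebook is random rather than learned, and `vq_encode`/`vq_decode` are hypothetical stand-ins for a trained VQ-VAE's quantizer. The point is that each latent vector is reduced to a single discrete codebook index, and it is these sparse indices, not pixels, that a GLC-style pipeline entropy-codes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained VQ-VAE codebook: K entries of dimension D.
K, D = 16, 4
codebook = rng.normal(size=(K, D))

def vq_encode(latents, codebook):
    """Map each latent vector to the index of its nearest codebook entry."""
    # latents: (N, D); squared distances to every codebook entry: (N, K)
    dists = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

def vq_decode(indices, codebook):
    """Look up codebook entries; a generative decoder would map these back to pixels."""
    return codebook[indices]

# Toy "image latents" from an encoder. Transmitting 8 small integers
# replaces transmitting 8 * D floats, before any entropy coding.
latents = rng.normal(size=(8, D))
idx = vq_encode(latents, codebook)
recon = vq_decode(idx, codebook)
```

The discrete indices are what make very low bitrates (e.g. under 0.04 bpp) plausible: their empirical distribution can be modeled and entropy-coded far more cheaply than continuous pixel values.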
URL
https://arxiv.org/abs/2512.20194