Paper Reading AI Learner

Generative Latent Coding for Ultra-Low Bitrate Image Compression

2025-12-23 09:35:40
Zhaoyang Jia, Jiahao Li, Bin Li, Houqiang Li, Yan Lu

Abstract

Most existing image compression approaches perform transform coding in the pixel space to reduce spatial redundancy. However, they struggle to achieve both high realism and high fidelity at low bitrates, as pixel-space distortion may not align with human perception. To address this issue, we introduce a Generative Latent Coding (GLC) architecture, which performs transform coding in the latent space of a generative vector-quantized variational auto-encoder (VQ-VAE) instead of in the pixel space. The generative latent space is characterized by greater sparsity, richer semantics, and better alignment with human perception, making it advantageous for high-realism, high-fidelity compression. Additionally, we introduce a categorical hyper module to reduce the bit cost of hyper-information, and a code-prediction-based supervision to enhance semantic consistency. Experiments demonstrate that GLC maintains high visual quality at less than 0.04 bpp on natural images and less than 0.01 bpp on facial images. On the CLIC2020 test set, it achieves the same FID as MS-ILLM with 45% fewer bits. Furthermore, the powerful generative latent space enables various applications built on the GLC pipeline, such as image restoration and style transfer. The code is available at this https URL.
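The core idea of coding in a VQ-VAE latent space is that the encoder's continuous latents are snapped to a learned codebook, and only the discrete code indices need to be entropy-coded. The sketch below is a minimal illustration of that vector-quantization step, not the paper's actual model; the codebook values, latent dimensions, and function name are invented for the example.

```python
import numpy as np

def vector_quantize(latents, codebook):
    """Map each latent vector to its nearest codebook entry.

    latents:  (N, D) array of continuous latent vectors
    codebook: (K, D) array of learned code vectors
    Returns (indices, quantized), where quantized[i] = codebook[indices[i]].
    """
    # Squared Euclidean distance between every latent and every code vector.
    dists = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = dists.argmin(axis=1)        # discrete symbols to be entropy-coded
    return indices, codebook[indices]     # quantized latents fed to the decoder

# Toy example: 4 latents and a 3-entry codebook in a 2-D latent space.
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 2.0]])
latents = np.array([[0.1, -0.1], [0.9, 1.2], [-1.1, 1.8], [0.6, 0.7]])
idx, quant = vector_quantize(latents, codebook)
print(idx)  # → [0 1 2 1]: these indices, not pixels, are what gets compressed
```

Because the bitstream carries only small integer indices into a shared codebook (plus the hyper-information the paper's categorical hyper module compresses), the rate can drop far below pixel-space transform coding while the generative decoder restores realistic detail.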

URL

https://arxiv.org/abs/2512.20194

PDF

https://arxiv.org/pdf/2512.20194.pdf

