Abstract
Replacing Gaussian decoders with a conditional diffusion model enhances the perceptual quality of reconstructions in neural image compression, but the lack of an inductive bias for image data in standard diffusion models limits their ability to reach state-of-the-art perceptual quality. To address this limitation, we adopt a non-isotropic diffusion model at the decoder side. This model imposes an inductive bias that distinguishes between frequency contents, thereby facilitating the generation of high-quality images. Moreover, our framework is equipped with a novel entropy model that accurately models the probability distribution of the latent representation by exploiting spatio-channel correlations in the latent space, while also accelerating the entropy decoding step. This channel-wise entropy model leverages both local and global spatial contexts within each channel chunk. The global spatial context is built upon a Transformer designed specifically for image compression. This Transformer employs a Laplacian-shaped positional encoding whose learnable parameters are adaptively adjusted for each channel cluster. Our experiments demonstrate that the proposed framework yields better perceptual quality than cutting-edge generative codecs, and that the proposed entropy model contributes notable bitrate savings.
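The abstract does not give the exact form of the Laplacian-shaped positional encoding, but the general idea of a Laplacian-shaped spatial prior can be sketched as an additive attention bias that decays with L1 distance between latent positions. The function name, the use of L1 distance, and the single `scale` parameter (which the paper makes learnable per channel cluster) are all assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def laplacian_positional_bias(height, width, scale):
    """Hypothetical sketch: an additive attention-logit bias of the form
    -|d|/scale over a flattened height x width latent grid, so that after
    the softmax the attention weights carry a Laplacian-shaped factor
    exp(-|d|/scale). In the paper, one such scale would be a learnable
    parameter adapted per channel cluster; here it is a plain argument."""
    ys, xs = np.meshgrid(np.arange(height), np.arange(width), indexing="ij")
    coords = np.stack([ys.ravel(), xs.ravel()], axis=-1).astype(float)  # (N, 2)
    # pairwise L1 distances between all grid positions
    dist = np.abs(coords[:, None, :] - coords[None, :, :]).sum(axis=-1)  # (N, N)
    return -dist / scale  # add to attention logits before the softmax
```

Such a bias concentrates each position's attention on its spatial neighborhood (the local context) while still allowing long-range interactions (the global context), with `scale` controlling how quickly the prior decays.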
URL
https://arxiv.org/abs/2403.16258