Abstract
This paper proposes a novel Non-Local Attention Optimized Deep Image Compression (NLAIC) framework, which is built on top of the popular variational auto-encoder (VAE) structure. Our NLAIC framework embeds non-local operations in the encoders and decoders for both the image and the latent-feature probability information (known as the hyperprior) to capture both local and global correlations, and applies an attention mechanism to generate masks that weigh the features of the image and hyperprior, implicitly adapting bit allocation for different features based on their importance. Furthermore, both the hyperpriors and the spatial-channel neighbors of the latent features are used to improve entropy coding. The proposed model outperforms existing methods on the Kodak dataset, including learned (e.g., Balle2019, Balle2018) and conventional (e.g., BPG, JPEG2000, JPEG) image compression methods, for both PSNR and MS-SSIM distortion metrics.
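To make the described mechanism concrete, below is a minimal NumPy sketch of the two ideas the abstract combines: a non-local (embedded-Gaussian-style) operation, in which every position aggregates information from all other positions, followed by a sigmoid attention mask that re-weights the features. This is an illustrative simplification under assumed shapes (`features` as an N-positions-by-C-channels matrix), not the authors' actual network, which uses learned convolutional embeddings inside a VAE.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def non_local_attention(features):
    # features: (N, C), N = spatial positions, C = channels (assumed layout)
    sim = features @ features.T           # (N, N) pairwise similarities
    weights = softmax(sim, axis=-1)       # each position attends to ALL positions (global)
    nl = weights @ features               # non-local response, same shape as input
    mask = 1.0 / (1.0 + np.exp(-nl))      # sigmoid attention mask in (0, 1)
    return features * mask                # re-weighted features: important features
                                          # keep more magnitude (implicit bit allocation)
```

Because the mask lies in (0, 1), less important features are attenuated toward zero, which costs fewer bits after quantization and entropy coding; that is the sense in which the masks "implicitly adapt bit allocation."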
URL
https://arxiv.org/abs/1904.09757