Abstract
We propose a new architecture for distributed image compression from a group of distributed data sources. The proposed architecture, which we refer to as a symmetric Encoder-Decoder Convolutional Recurrent Neural Network, significantly outperforms state-of-the-art compression techniques such as JPEG on rate-distortion curves. We also show that training distributed encoders with a joint decoder on correlated data sources yields much better compression performance than training the codecs separately. For 10 distributed sources, our distributed system remarkably performs within 2 dB peak signal-to-noise ratio (PSNR) of a single codec trained with all data sources. We experiment with distributed sources of different correlations and show how well our methodology matches the Slepian-Wolf Theorem in Distributed Source Coding (DSC). Our method is also shown to be robust to the absence of encoded data from a number of the distributed sources. To the best of our knowledge, this is the first data-driven DSC framework for general distributed code design with Deep Learning.
URL
https://arxiv.org/abs/1903.09887
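The core idea of the abstract, that a joint decoder can exploit correlation between independently encoded sources, can be illustrated with a toy sketch. This is not the paper's neural architecture: the encoders here are plain uniform scalar quantizers, the sources are synthetic correlated Gaussian signals, and the "joint decoder" simply averages the dequantized sources; all names and parameters are illustrative assumptions.

```python
import random
import math

random.seed(0)
N = 4096     # samples per source
STEP = 0.5   # quantization step (toy rate control)

# Two correlated "sources": a shared signal plus small independent noise,
# a stand-in for the correlated image sources in the paper.
base = [random.gauss(0.0, 1.0) for _ in range(N)]
x1 = [b + 0.1 * random.gauss(0.0, 1.0) for b in base]
x2 = [b + 0.1 * random.gauss(0.0, 1.0) for b in base]

def encode(x, step=STEP):
    """Toy distributed encoder: uniform scalar quantization
    (stand-in for the paper's convolutional recurrent encoder)."""
    return [round(v / step) for v in x]

def decode_separate(code, step=STEP):
    """Decode one source on its own (separately trained codec analogue)."""
    return [c * step for c in code]

def decode_joint(codes, step=STEP):
    """Toy joint decoder: average the dequantized correlated sources,
    suppressing their independent quantization noise."""
    return [sum(cs) * step / len(codes) for cs in zip(*codes)]

def psnr(ref, est):
    """Peak signal-to-noise ratio in dB, the metric used in the abstract."""
    mse = sum((r - e) ** 2 for r, e in zip(ref, est)) / len(ref)
    peak = max(ref) - min(ref)
    return 10.0 * math.log10(peak ** 2 / mse)

c1, c2 = encode(x1), encode(x2)
psnr_separate = psnr(x1, decode_separate(c1))
psnr_joint = psnr(x1, decode_joint([c1, c2]))
print(f"separate decoding: {psnr_separate:.2f} dB")
print(f"joint decoding:    {psnr_joint:.2f} dB")
```

With these settings the joint decoder reconstructs source 1 at a higher PSNR than separate decoding, mirroring (in a much simpler setting) the DSC-style gain the paper reports for its learned joint decoder.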