Abstract
Deep learning-based video compression is a challenging task, and many previous state-of-the-art learning-based video codecs use optical flow to exploit the temporal correlation between successive frames and then compress the residual error. Although these two-stage models are end-to-end optimized, the epistemic uncertainty in the motion estimation and the aleatoric uncertainty from the quantization operation lead to errors in the intermediate representations and introduce artifacts into the reconstructed frames. This inherent flaw limits the potential for higher bit-rate savings. To address this issue, we propose an uncertainty-aware video compression model that can effectively capture predictive uncertainty with deep ensembles. Additionally, we introduce an ensemble-aware loss to encourage diversity among ensemble members and investigate the benefits of incorporating adversarial training into the video compression task. Experimental results on 1080p sequences show that our model achieves bit savings of more than 20% compared to DVC Pro.
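The abstract's core mechanism, estimating predictive uncertainty with deep ensembles, can be illustrated with a generic sketch: train several independently initialized members, then use the spread of their predictions as an uncertainty signal. This is a minimal toy illustration of the deep-ensembles idea, not the paper's actual codec; the linear "members" and the `make_member` helper are hypothetical stand-ins for the real compression networks.

```python
import numpy as np

def make_member(seed, dim=4):
    """Hypothetical stand-in for one independently trained network:
    a toy linear predictor with its own random initialization."""
    w = np.random.default_rng(seed).normal(size=(dim,))
    return lambda x: x @ w

# An ensemble of 5 members with different random seeds (initializations).
members = [make_member(seed) for seed in range(5)]

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))  # a batch of 8 toy inputs (e.g., frame features)

# Stack per-member predictions: shape (num_members, batch).
preds = np.stack([m(x) for m in members])

mean = preds.mean(axis=0)  # ensemble prediction
var = preds.var(axis=0)    # disagreement among members ≈ epistemic uncertainty

print(mean.shape, var.shape)  # (8,) (8,)
```

Inputs where the members disagree (high `var`) are exactly those where a downstream stage, such as residual coding, would be warned that the motion-compensated prediction is unreliable.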
Abstract (translated)
Deep learning-based video compression is a challenging task; many previous learning-based video codecs use optical flow to exploit the temporal correlation between successive frames and then compress the residual error. Although these two-stage models are end-to-end optimized, the epistemic uncertainty in motion estimation and the aleatoric uncertainty introduced by the quantization operation lead to errors in the intermediate representations and introduce artifacts into the reconstructed frames. This inherent flaw limits the potential for higher bit-rate savings. To address this issue, we propose an uncertainty-aware video compression model that can effectively capture the model's predictive uncertainty. In addition, we introduce an ensemble-aware loss to encourage diversity among ensemble members, and investigate the potential of introducing adversarial training into the video compression task. Experimental results on 1080p sequences show that our model can save more than 20% in bits compared to DVC Pro.
URL
https://arxiv.org/abs/2403.19158