Abstract
We present a deep-learning approach for restoring a sequence of turbulence-distorted video frames affected by turbulent deformations and space-time-varying blurs. Instead of requiring a massive number of training samples for the deep network, we propose a training strategy based on a new data augmentation method that models turbulence from a relatively small dataset. We then introduce a subsampling method to enhance the restoration performance of the presented GAN model. The contributions of this paper are threefold: first, we introduce a simple but effective data augmentation algorithm that models real-world turbulence for training the deep network; second, we are the first to propose a Wasserstein GAN combined with an $\ell_1$ cost for the successful restoration of turbulence-corrupted video sequences; third, we apply a subsampling algorithm to filter out strongly corrupted frames and generate a video sequence of better quality.
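As a rough illustration (not the authors' code), the two central ingredients described above — a generator objective combining the Wasserstein critic term with an $\ell_1$ fidelity cost, and quality-based frame subsampling — might be sketched as follows. The weight `lam` and the gradient-magnitude sharpness proxy are assumptions for illustration, not values or criteria taken from the paper:

```python
import numpy as np

def wgan_l1_generator_loss(critic_scores, restored, ground_truth, lam=100.0):
    """Hypothetical generator objective: Wasserstein adversarial term
    plus a weighted l1 distance to the ground-truth frame.
    `lam` (the l1 weight) is an illustrative choice, not from the paper."""
    adversarial = -np.mean(critic_scores)                 # maximize critic score
    fidelity = np.mean(np.abs(restored - ground_truth))   # l1 restoration cost
    return adversarial + lam * fidelity

def subsample_frames(frames, keep_ratio=0.5):
    """Keep the sharpest fraction of frames, discarding strongly
    corrupted ones.  Mean gradient magnitude is used here as a
    stand-in sharpness score for the paper's selection criterion."""
    def sharpness(frame):
        gy, gx = np.gradient(frame.astype(float))
        return np.mean(np.hypot(gx, gy))
    scores = [sharpness(f) for f in frames]
    k = max(1, int(len(frames) * keep_ratio))
    keep = sorted(range(len(frames)), key=lambda i: scores[i], reverse=True)[:k]
    return sorted(keep)  # preserve temporal order of the kept frames
```

For example, `subsample_frames` applied to a flat (blurred-out) frame and a high-contrast frame with `keep_ratio=0.5` retains only the high-contrast one.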
URL
https://arxiv.org/abs/1807.04418