Abstract
In this paper, we present a new inpainting framework for recovering missing regions of video frames. Compared with image inpainting, performing this task on video presents new challenges, such as how to preserve temporal consistency and spatial details, and how to handle input videos of arbitrary size and length quickly and efficiently. Towards this end, we propose a novel deep learning architecture that incorporates ConvLSTM and optical flow to model spatio-temporal consistency in videos. It also saves considerable computational resources, so our method can handle videos with larger frame sizes and arbitrary lengths in a streaming fashion in real time. Furthermore, to generate accurate optical flow from corrupted frames, we propose a robust flow generation module, in which two flow sources are fed to a flow blending network that is trained to fuse them. We conduct extensive experiments to evaluate our method in various scenarios and on different datasets, both qualitatively and quantitatively. The experimental results demonstrate the superiority of our method over state-of-the-art inpainting approaches.
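The flow blending idea described above can be illustrated with a minimal sketch. In the paper, a trained blending network predicts how to fuse the two candidate flow fields; here the per-pixel blending weight is simply a given array (a hypothetical stand-in for the network's output), and the fusion is a per-pixel convex combination of the two flows:

```python
# Hedged sketch of flow blending: fuse two candidate optical-flow fields
# using a per-pixel weight in [0, 1]. In the paper this weight would be
# predicted by the trained flow blending network; here it is hard-coded
# purely for illustration. A flow field is a 2D grid of (dx, dy) vectors.

def blend_flows(flow_a, flow_b, weights):
    """Per-pixel fusion: out = w * flow_a + (1 - w) * flow_b."""
    return [
        [
            (w * ax + (1 - w) * bx, w * ay + (1 - w) * by)
            for (ax, ay), (bx, by), w in zip(row_a, row_b, row_w)
        ]
        for row_a, row_b, row_w in zip(flow_a, flow_b, weights)
    ]

# Toy 1x2 flow fields (one row, two pixels).
flow_a = [[(1.0, 0.0), (2.0, 0.0)]]   # e.g. flow completed from the corrupted frame
flow_b = [[(0.0, 1.0), (0.0, 2.0)]]   # e.g. flow warped from a neighboring frame
weights = [[0.5, 0.25]]               # hypothetical blending-network output

fused = blend_flows(flow_a, flow_b, weights)
print(fused)  # [[(0.5, 0.5), (0.5, 1.5)]]
```

The convex combination guarantees the fused flow stays between the two sources at every pixel, so the learned weight only has to decide which source to trust where.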
URL
https://arxiv.org/abs/1905.02882