Abstract
Video object removal is a challenging task in video processing that often requires massive human effort. Given the mask of the foreground object in each frame, the goal is to complete (inpaint) the object region and generate a video without the target object. While deep learning-based methods have recently achieved great success on the image inpainting task, they often produce inconsistent results between frames when applied to videos. In this work, we propose a novel learning-based Video Object Removal Network (VORNet) that solves the video object removal task in a spatio-temporally consistent manner by combining optical flow warping with an image-based inpainting model. Experiments are conducted on our Synthesized Video Object Removal (SVOR) dataset, built on the YouTube-VOS video segmentation dataset, and both objective and subjective evaluations demonstrate that VORNet generates more spatially and temporally consistent videos than existing methods.
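The core idea of combining flow warping with inpainting can be illustrated with a minimal sketch: propagate known pixels from the previous frame into the masked hole via backward optical-flow warping, and fall back to single-image inpainting where no warped content is available. The function names (`warp_by_flow`, `compose_hole`) and the nearest-neighbor sampling are illustrative assumptions, not the paper's actual network.

```python
import numpy as np

def warp_by_flow(prev_frame, flow):
    """Backward-warp prev_frame toward the current frame.

    flow[y, x] = (u, v) means current pixel (x, y) corresponds to
    (x + u, y + v) in the previous frame. Nearest-neighbor sampling
    is used here for simplicity (a real system would interpolate).
    """
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    sx = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return prev_frame[sy, sx]

def compose_hole(frame, hole_mask, warped, inpainted, flow_valid):
    """Fill the object hole: prefer temporally consistent warped pixels
    where the flow is valid, otherwise use the image-inpainted pixels.
    (VORNet learns this combination; here it is a hard selection.)"""
    out = frame.copy()
    use_warp = hole_mask & flow_valid
    use_inpaint = hole_mask & ~flow_valid
    out[use_warp] = warped[use_warp]
    out[use_inpaint] = inpainted[use_inpaint]
    return out
```

With zero flow, warping is the identity, so hole pixels with valid flow are filled directly from the previous frame, which is the temporally consistent choice.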
URL
https://arxiv.org/abs/1904.06726