Abstract
Manually re-drawing an image in a certain artistic style takes a professional artist a long time; doing this single-handedly for a whole video sequence is beyond imagination. We present two computational approaches that transfer the style from one image (for example, a painting) to an entire video sequence. In our first approach, we adapt the original image style transfer technique of Gatys et al., based on energy minimization, to videos. We introduce new initialization schemes and new loss functions to generate consistent and stable stylized video sequences, even in cases with large motion and strong occlusion. Our second approach formulates video stylization as a learning problem. We propose a deep network architecture and training procedures that allow us to stylize videos of arbitrary length in a consistent and stable way, and nearly in real time. We show that the proposed methods clearly outperform simpler baselines both qualitatively and quantitatively. Finally, we propose a way to adapt these approaches to 360-degree images and videos as they emerge with recent virtual reality hardware.
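The abstract mentions new loss functions that keep stylized frames consistent under motion and occlusion. A common way to express such a constraint is a per-pixel temporal consistency term that penalizes deviation of the current stylized frame from the previous stylized frame warped by optical flow, masked to non-occluded pixels. The sketch below is illustrative only (function and variable names are our own, not the paper's code), assuming the warped frame and occlusion mask are already computed:

```python
import numpy as np

def temporal_consistency_loss(stylized, warped_prev, occlusion_mask):
    """Mean squared difference between the current stylized frame and the
    flow-warped previous stylized frame, counted only where the occlusion
    mask is 1 (pixel visible in both frames). All inputs share one shape."""
    diff = stylized - warped_prev
    # Normalize by the total number of pixels, so fully occluded regions
    # contribute zero loss rather than being renormalized away.
    return float(np.sum(occlusion_mask * diff ** 2) / stylized.size)

# Illustrative toy example: a 2x2 single-channel "frame".
current = np.array([[1.0, 1.0], [1.0, 0.0]])
warped  = np.zeros((2, 2))
mask    = np.array([[1.0, 1.0], [0.0, 1.0]])  # top-right pixel occluded? no; bottom-left is
loss = temporal_consistency_loss(current, warped, mask)
```

In an actual optimization or training loop this term would be added, with a weighting factor, to the usual content and style losses; occluded pixels are left unconstrained so the stylization can repaint newly disoccluded regions freely.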
URL
https://arxiv.org/abs/1708.04538