Abstract
Traditional approaches to color propagation in videos rely on some form of matching between consecutive video frames; colors are then propagated both spatially and temporally using appearance descriptors. These methods, however, are computationally expensive and do not exploit the semantic information of the scene. In this work we propose a deep learning framework for color propagation that combines a local strategy, which propagates colors frame by frame to ensure temporal stability, with a global strategy, which uses semantics to propagate colors over longer time ranges. Our evaluation shows the superiority of our approach over existing video and image color propagation methods as well as neural photo-realistic style transfer approaches.
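To make the two-branch idea concrete, the following is a minimal toy sketch, not the paper's actual network: the local branch simply carries the previous frame's colors forward (a stand-in for learned flow-based warping), the global branch copies colors from a reference keyframe by matching pixel intensities (a stand-in for deep semantic feature matching), and the two results are blended. All function names and the blending weight `alpha` are hypothetical illustrations.

```python
import numpy as np

def local_propagate(prev_color):
    # Toy local branch: assume a static scene and carry the previous
    # frame's colors forward (stand-in for flow-guided warping).
    return prev_color.copy()

def global_propagate(gray_frame, key_gray, key_color):
    # Toy global branch: match each pixel's intensity to the closest
    # keyframe pixel (stand-in for semantic feature matching) and copy
    # that keyframe pixel's color. O(N^2), fine only for tiny frames.
    h, w = gray_frame.shape
    diffs = np.abs(key_gray.ravel()[None, :] - gray_frame.ravel()[:, None])
    idx = diffs.argmin(axis=1)
    return key_color.reshape(-1, 3)[idx].reshape(h, w, 3)

def propagate_video(gray_frames, key_color, alpha=0.5):
    # Colorize a grayscale sequence given a colored first frame by
    # fusing the local and global branches with a fixed blend weight.
    key_gray = gray_frames[0]
    colored = [key_color]
    for gray in gray_frames[1:]:
        loc = local_propagate(colored[-1])
        glob = global_propagate(gray, key_gray, key_color)
        colored.append(alpha * loc + (1.0 - alpha) * glob)
    return colored
```

In the paper the fusion is learned end-to-end rather than a fixed average, and the matching operates on deep features rather than raw intensities; this sketch only mirrors the overall local/global structure.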
URL
https://arxiv.org/abs/1808.03232