Abstract
This paper proposes the novel task of video generation conditioned on a single semantic label map, which offers a good balance between flexibility and quality in the generation process. Unlike typical end-to-end approaches, which model both scene content and dynamics in a single step, we decompose this difficult task into two sub-problems. Since current image generation methods render finer detail than video generation methods, we synthesize high-quality content by generating only the first frame. We then animate the scene based on its semantic meaning to obtain a temporally coherent video. Specifically, we employ a conditional VAE (cVAE) to predict optical flow as an intermediate step in generating a video sequence conditioned on the initial single frame, and we integrate the semantic label map into the flow prediction module, which substantially improves the image-to-video generation process. Extensive experiments on the Cityscapes dataset show that our method outperforms all competing methods.
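Conceptually, the pipeline reduces to two stages: render a single frame from the label map with an off-the-shelf image generator, then sample optical-flow fields from a cVAE conditioned on that frame and the label map, and warp the frame forward in time. Below is a minimal sketch of the second stage, assuming a PyTorch setup; the module FlowCVAE, its layer sizes, the latent dimension, and the grid_sample-based warping are illustrative assumptions for exposition, not the authors' architecture.

# Minimal sketch (not the authors' code) of the two-stage idea:
# (1) an external image generator turns the semantic label map into a
#     first frame; (2) a conditional VAE, conditioned on that frame and
#     the label map, predicts per-step optical flow used to warp the
#     frame into a video. All names, shapes, and sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FlowCVAE(nn.Module):
    """Predicts T optical-flow fields from the first frame + label map."""
    def __init__(self, label_channels: int, steps: int = 8, z_dim: int = 64):
        super().__init__()
        self.steps, self.z_dim = steps, z_dim
        in_ch = 3 + label_channels          # RGB frame + one-hot labels
        self.encoder = nn.Sequential(       # infers q(z | frame, labels)
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 2 * z_dim),       # mean and log-variance
        )
        self.decoder = nn.Sequential(       # maps (inputs, z) -> T flows
            nn.Conv2d(in_ch + z_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 2 * steps, 3, padding=1),  # (dx, dy) per step
        )

    def forward(self, frame, labels):
        x = torch.cat([frame, labels], dim=1)
        mu, logvar = self.encoder(x).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        z_map = z[:, :, None, None].expand(-1, -1, *x.shape[2:])
        flows = self.decoder(torch.cat([x, z_map], dim=1))
        b, _, h, w = frame.shape
        return flows.view(b, self.steps, 2, h, w), mu, logvar

def warp(frame, flow):
    """Backward-warp `frame` by one flow field using grid_sample."""
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).expand(b, -1, -1, -1)
    # Normalize pixel displacements to grid_sample's [-1, 1] coordinates.
    norm = flow.permute(0, 2, 3, 1) / torch.tensor([(w - 1) / 2, (h - 1) / 2])
    return F.grid_sample(frame, base + norm, align_corners=True)

# Usage: label map -> (external image generator) -> first frame -> video.
labels = torch.randn(1, 20, 64, 128)        # e.g. 20 semantic classes
frame = torch.randn(1, 3, 64, 128)          # stand-in for generated frame
model = FlowCVAE(label_channels=20)
flows, mu, logvar = model(frame, labels)
video = torch.stack([warp(frame, flows[:, t]) for t in range(model.steps)], 1)
print(video.shape)                          # (1, 8, 3, 64, 128)

At training time a cVAE of this shape would be fit with a reconstruction loss on the warped frames plus a KL term on (mu, logvar); at test time z is sampled from the prior, which is what makes the predicted motion stochastic rather than a single deterministic rollout.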
URL
https://arxiv.org/abs/1903.04480