Abstract
Large-scale text-to-video diffusion models have demonstrated an exceptional ability to synthesize diverse videos. However, because extensive text-to-video datasets and the computational resources needed for training are lacking, directly applying these models to video stylization remains difficult. Moreover, since the noise-addition process applied to the input content is random and destructive, satisfying the content-preservation requirement of style transfer is challenging. This paper proposes a zero-shot video stylization method named Style-A-Video, which combines a generative pre-trained transformer with an image latent diffusion model to achieve concise, text-controlled video stylization. We improve the guidance condition in the denoising process, establishing a balance between artistic expression and structure preservation. Furthermore, to reduce inter-frame flicker and avoid introducing additional artifacts, we employ sampling optimization and a temporal-consistency module. Extensive experiments show that our method attains superior content preservation and stylistic quality at lower cost than previous solutions. Code will be available at this https URL.
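To make the pipeline described above concrete, the following is a minimal sketch (not the paper's released code) of the two ideas the abstract names: a denoising step whose guidance balances a style (text) term against a content-preservation term, and a simple temporal-consistency blend between consecutive stylized frames to suppress flicker. The denoiser is a toy stand-in for a pre-trained image latent diffusion model, and all names and weights (`toy_denoiser`, `w_style`, `w_content`, `blend`) are illustrative assumptions rather than the paper's actual parameters.

```python
# Hypothetical sketch of balanced guidance + temporal blending for video stylization.
import torch

def toy_denoiser(z_t, t, cond):
    """Placeholder for noise prediction from a pre-trained latent diffusion model."""
    return 0.1 * (z_t - cond)

def guided_step(z_t, t, text_cond, content_cond, w_style=7.5, w_content=1.5):
    """One denoising step whose guidance balances style and content terms."""
    eps_uncond = toy_denoiser(z_t, t, torch.zeros_like(z_t))
    eps_style = toy_denoiser(z_t, t, text_cond)       # pulls toward the text style
    eps_content = toy_denoiser(z_t, t, content_cond)  # pulls toward the input frame
    eps = (eps_uncond
           + w_style * (eps_style - eps_uncond)
           + w_content * (eps_content - eps_uncond))
    return z_t - eps  # simplified update in place of a full sampler step

def stylize_video(frame_latents, text_cond, steps=20, blend=0.3):
    """Stylize each frame latent, blending with the previous result to cut flicker."""
    prev, outputs = None, []
    for content_cond in frame_latents:
        z = torch.randn_like(content_cond)
        for t in reversed(range(steps)):
            z = guided_step(z, t, text_cond, content_cond)
        if prev is not None:
            z = (1.0 - blend) * z + blend * prev  # naive temporal-consistency blend
        prev = z
        outputs.append(z)
    return outputs

# Example usage with random latents standing in for encoded video frames.
frames = [torch.randn(1, 4, 64, 64) for _ in range(3)]
styled = stylize_video(frames, text_cond=torch.randn(1, 4, 64, 64))
print(len(styled), styled[0].shape)
```

In the actual method, the temporal-consistency step would operate on decoded frames with a learned or flow-based module rather than the naive latent blend shown here; the sketch only illustrates where each component sits in the loop.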
URL
https://arxiv.org/abs/2305.05464