Abstract
Recent advances in video generation have been remarkable, yet many existing methods still struggle with temporal consistency and poor text-video alignment. Moreover, the field lacks effective techniques for text-guided video inpainting, in stark contrast to the well-explored domain of text-guided image inpainting. To this end, this paper proposes a novel text-guided video inpainting model that achieves better consistency, controllability, and compatibility. Specifically, we introduce a simple yet efficient motion-capture module to preserve motion consistency, design an instance-aware region selection (instead of random region selection) to obtain better textual controllability, and employ a novel strategy to inject personalized models into our CoCoCo model, thus obtaining better model compatibility. Extensive experiments show that our model generates high-quality video clips while exhibiting better motion consistency, textual controllability, and model compatibility. More details are shown in [this http URL](this http URL).
URL
https://arxiv.org/abs/2403.12035