Abstract
In this paper, we consider a novel and practical setting for talking-face video generation. Specifically, we focus on scenarios involving multi-person interactions, where a talking context, such as an audience or the surroundings, is present. In these situations, video generation should take the context into account so that the generated content is naturally aligned with the driving audio and spatially coherent with the context. To achieve this, we propose a two-stage, cross-modal, controllable video generation pipeline that uses facial landmarks as an explicit and compact control signal to bridge the driving audio, the talking context, and the generated video. Within this pipeline, we devise a 3D video diffusion model that allows efficient control over both spatial conditions (landmarks and context video) and the audio condition, enabling temporally coherent generation. Experimental results verify the advantage of the proposed method over baselines in terms of audio-video synchronization, video fidelity, and frame consistency.
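The abstract describes the two-stage design only at a high level. The following minimal PyTorch sketch shows one way such a pipeline could be wired together: stage 1 predicts a landmark sequence from audio, and stage 2 is a 3D (spatio-temporal) diffusion denoiser conditioned on landmark maps, the context video, and audio. The module names (`AudioToLandmarks`, `VideoDiffusion3D`), feature dimensions, and conditioning scheme are all illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only: module names, shapes, and hyperparameters are
# assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class AudioToLandmarks(nn.Module):
    """Stage 1 (assumed): map driving-audio features to per-frame facial landmarks."""

    def __init__(self, audio_dim=80, n_landmarks=68, hidden=256):
        super().__init__()
        self.n_landmarks = n_landmarks
        self.proj = nn.Linear(audio_dim, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(hidden, n_landmarks * 2)  # (x, y) per landmark

    def forward(self, audio_feats):                # audio_feats: (B, T, audio_dim)
        h = self.temporal(self.proj(audio_feats))  # (B, T, hidden)
        return self.head(h).unflatten(-1, (self.n_landmarks, 2))  # (B, T, 68, 2)


class VideoDiffusion3D(nn.Module):
    """Stage 2 (assumed): one denoising pass of a 3D video diffusion model,
    conditioned on rasterized landmark maps, the context video, and audio."""

    def __init__(self, channels=64, audio_dim=80):
        super().__init__()
        # Noisy video (3 ch) + landmark maps (1 ch) + context video (3 ch),
        # stacked along channels as the spatial conditions.
        self.backbone = nn.Sequential(
            nn.Conv3d(3 + 1 + 3, channels, kernel_size=3, padding=1),
            nn.SiLU(),
            nn.Conv3d(channels, 3, kernel_size=3, padding=1),
        )
        self.audio_proj = nn.Linear(audio_dim, 3)  # per-frame audio conditioning

    def forward(self, noisy_video, landmark_maps, context_video, audio_feats):
        # noisy_video, context_video: (B, 3, T, H, W); landmark_maps: (B, 1, T, H, W)
        x = torch.cat([noisy_video, landmark_maps, context_video], dim=1)
        eps = self.backbone(x)               # predicted noise, (B, 3, T, H, W)
        bias = self.audio_proj(audio_feats)  # (B, T, 3)
        return eps + bias.permute(0, 2, 1)[..., None, None]  # broadcast over H, W


B, T, H, W = 1, 8, 64, 64
audio = torch.randn(B, T, 80)
landmarks = AudioToLandmarks()(audio)  # stage 1: audio -> landmark sequence
lm_maps = torch.randn(B, 1, T, H, W)   # stand-in for rasterized landmark maps
noise = VideoDiffusion3D()(torch.randn(B, 3, T, H, W), lm_maps,
                           torch.randn(B, 3, T, H, W), audio)
print(landmarks.shape, noise.shape)    # (1, 8, 68, 2) and (1, 3, 8, 64, 64)
```

In the actual method, stage 1's predicted landmarks would be rendered into the condition maps fed to stage 2; here a random tensor stands in for that rasterization step.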
URL
https://arxiv.org/abs/2402.18092