Abstract
We propose VLOGGER, a method for audio-driven human video generation from a single input image of a person, which builds on the success of recent generative diffusion models. Our method consists of 1) a stochastic human-to-3D-motion diffusion model, and 2) a novel diffusion-based architecture that augments text-to-image models with both spatial and temporal controls. This supports the generation of high-quality video of variable length, easily controllable through high-level representations of human faces and bodies. In contrast to previous work, our method does not require training for each person, does not rely on face detection and cropping, generates the complete image (not just the face or the lips), and considers a broad spectrum of scenarios (e.g. visible torso or diverse subject identities) that are critical to correctly synthesize humans who communicate. We also curate MENTOR, a new and diverse dataset with 3D pose and expression annotations, one order of magnitude larger than previous ones (800,000 identities) and with dynamic gestures, on which we train and ablate our main technical contributions. VLOGGER outperforms state-of-the-art methods on three public benchmarks in image quality, identity preservation, and temporal consistency, while also generating upper-body gestures. We analyze the performance of VLOGGER with respect to multiple diversity metrics, showing that our architectural choices and the use of MENTOR benefit training a fair and unbiased model at scale. Finally, we show applications in video editing and personalization.
URL
https://arxiv.org/abs/2403.08764