Abstract
In this study, we propose AniPortrait, a novel framework for generating high-quality animation driven by audio and a reference portrait image. Our methodology is divided into two stages. First, we extract 3D intermediate representations from audio and project them into a sequence of 2D facial landmarks. We then employ a robust diffusion model, coupled with a motion module, to convert the landmark sequence into photorealistic and temporally consistent portrait animation. Experimental results demonstrate the superiority of AniPortrait in terms of facial naturalness, pose diversity, and visual quality, thereby offering an enhanced perceptual experience. Moreover, our methodology exhibits considerable potential in flexibility and controllability, and can be effectively applied in areas such as facial motion editing or face reenactment. We release code and model weights at this https URL.
Abstract (translated)
In this study, we propose AniPortrait, a novel framework for generating high-quality animation driven by audio and a reference portrait image. Our method is divided into two stages. First, we extract 3D intermediate representations from audio and project them into a sequence of 2D facial landmarks. We then employ a robust diffusion model, combined with a motion module, to convert the landmark sequence into photorealistic and temporally consistent portrait animation. Experimental results show that AniPortrait is superior in facial naturalness, pose diversity, and visual quality, thereby offering a more lifelike perceptual experience. Moreover, our method shows considerable potential in flexibility and controllability, and can be effectively applied to areas such as facial motion editing or face reenactment. Code and model weights can now be downloaded from the link below:
URL
https://arxiv.org/abs/2403.17694
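The two-stage pipeline described in the abstract can be sketched as follows. This is a minimal illustrative stand-in, not the authors' implementation: the function names, the 68-landmark count, and the placeholder bodies are assumptions for exposition only; the real system uses trained networks for both stages.

```python
import numpy as np

def audio_to_landmarks(audio: np.ndarray, n_frames: int) -> np.ndarray:
    """Stage 1 (illustrative stand-in): a real model would predict 3D
    intermediate face representations from audio and project them to a
    sequence of 2D facial landmarks, one set per video frame."""
    n_landmarks = 68  # common facial-landmark count; an assumption here
    rng = np.random.default_rng(0)
    # Placeholder output with the expected shape (frames, landmarks, xy).
    return rng.random((n_frames, n_landmarks, 2))

def landmarks_to_video(ref_image: np.ndarray, landmarks: np.ndarray) -> np.ndarray:
    """Stage 2 (illustrative stand-in): a diffusion model plus motion
    module would render photorealistic, temporally consistent frames
    conditioned on the reference portrait and the landmark sequence."""
    n_frames = landmarks.shape[0]
    h, w, c = ref_image.shape
    # Placeholder: repeat the reference image to the output video shape.
    return np.broadcast_to(ref_image, (n_frames, h, w, c)).copy()

ref = np.zeros((256, 256, 3))           # reference portrait image
lmks = audio_to_landmarks(np.zeros(16000), n_frames=8)
video = landmarks_to_video(ref, lmks)
print(video.shape)  # (8, 256, 256, 3)
```

The sketch only pins down the data flow: audio in, per-frame 2D landmarks as the intermediate representation, and a frame sequence out.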