Abstract
Head avatars animated by visual signals have gained popularity, particularly in cross-driving synthesis, where the driver differs from the animated character, a challenging but highly practical setting. The recently presented MegaPortraits model has demonstrated state-of-the-art results in this domain. We conduct a deep examination and evaluation of this model, with a particular focus on its latent space for facial expression descriptors, and uncover several limitations in its ability to express intense face motions. To address these limitations, we propose substantial changes to both the training pipeline and the model architecture, introducing our EMOPortraits model, in which we: enhance the model's capability to faithfully reproduce intense, asymmetric facial expressions, setting a new state-of-the-art result in the emotion transfer task and surpassing previous methods in both metrics and quality; incorporate a speech-driven mode into our model, achieving top-tier performance in audio-driven facial animation and making it possible to drive the source identity through diverse modalities, including visual signal, audio, or a blend of both; and propose a novel multi-view video dataset featuring a wide range of intense and asymmetric facial expressions, filling the gap left by the absence of such data in existing datasets.
URL
https://arxiv.org/abs/2404.19110