Abstract
Speech-driven facial animation methods generally fall into two main classes, 3D and 2D talking face, both of which have attracted considerable research attention in recent years. However, to the best of our knowledge, research on 3D talking face has not gone as deep as that on 2D talking face in terms of lip-synchronization (lip-sync) and speech perception. To bridge the gap between the two sub-fields, we propose a learning framework named Learn2Talk, which constructs a better 3D talking face network by exploiting two expertise points from the field of 2D talking face. First, inspired by the audio-video sync network, a 3D lip-sync expert model is devised to pursue lip-sync between audio and 3D facial motion. Second, a teacher model selected from 2D talking face methods is used to guide the training of the audio-to-3D-motions regression network, yielding higher 3D vertex accuracy. Extensive experiments show the advantages of the proposed framework over state-of-the-art methods in terms of lip-sync, vertex accuracy and speech perception. Finally, we show two applications of the proposed framework: audio-visual speech recognition and speech-driven 3D Gaussian Splatting based avatar animation.
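To make the two expertise points concrete, below is a minimal PyTorch sketch of how a SyncNet-style lip-sync expert could be adapted to 3D motion and combined with teacher guidance during student training. This is an illustration under stated assumptions, not the paper's implementation: the encoder widths, window lengths, loss weights, and the names `LipSyncExpert3D`, `expert_pretrain_loss`, `sync_loss`, and `student_loss` are hypothetical placeholders, and the actual model likely uses convolutional rather than flattened linear encoders.

```python
# Hypothetical sketch of a SyncNet-style lip-sync expert for 3D motion:
# an audio window and a lip-vertex-motion window are embedded separately,
# and their cosine similarity scores whether the pair is in sync.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LipSyncExpert3D(nn.Module):
    def __init__(self, n_mels=80, audio_frames=16, motion_frames=5,
                 n_lip_verts=100, emb_dim=256):
        super().__init__()
        # Audio branch: embeds a short mel-spectrogram window.
        self.audio_enc = nn.Sequential(
            nn.Linear(n_mels * audio_frames, 512), nn.ReLU(),
            nn.Linear(512, emb_dim),
        )
        # Motion branch: embeds a window of 3D lip-vertex displacements.
        self.motion_enc = nn.Sequential(
            nn.Linear(n_lip_verts * 3 * motion_frames, 512), nn.ReLU(),
            nn.Linear(512, emb_dim),
        )

    def forward(self, mel_win, lip_win):
        # mel_win: (B, audio_frames, n_mels)
        # lip_win: (B, motion_frames, n_lip_verts, 3)
        a = F.normalize(self.audio_enc(mel_win.flatten(1)), dim=-1)
        m = F.normalize(self.motion_enc(lip_win.flatten(1)), dim=-1)
        return (a * m).sum(-1)  # cosine similarity in [-1, 1]

def expert_pretrain_loss(expert, mel_win, lip_win, in_sync):
    # BCE over in-sync vs. temporally offset (audio, motion) pairs,
    # following 2D SyncNet-style pre-training.
    sim = expert(mel_win, lip_win)
    prob = ((sim + 1) / 2).clamp(1e-6, 1 - 1e-6)
    return F.binary_cross_entropy(prob, in_sync)

def sync_loss(frozen_expert, mel_win, pred_lip_win):
    # At student-training time, the frozen expert scores predicted motion;
    # driving the similarity toward 1 encourages lip-sync.
    return (1.0 - frozen_expert(mel_win, pred_lip_win)).mean()

def student_loss(pred_verts, gt_verts, teacher_verts,
                 mel_win, pred_lip_win, frozen_expert,
                 w_teacher=0.5, w_sync=0.1):
    # Hypothetical combined objective: ground-truth vertex reconstruction,
    # guidance toward pseudo-3D vertices derived from a 2D talking-face
    # teacher, and the frozen expert's sync score on the predicted lips.
    rec = F.mse_loss(pred_verts, gt_verts)
    distill = F.mse_loss(pred_verts, teacher_verts)
    sync = sync_loss(frozen_expert, mel_win, pred_lip_win)
    return rec + w_teacher * distill + w_sync * sync
```

The design mirrors the 2D pipeline the abstract references: the expert is pre-trained on real (audio, motion) pairs, then frozen and reused as a perceptual loss, while the teacher term transfers lip-sync quality from the better-studied 2D domain into the 3D regression network.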
URL
https://arxiv.org/abs/2404.12888