Abstract
Speech-driven 3D face animation aims to generate realistic facial expressions that match both the speech content and its emotion. However, existing methods often neglect emotional facial expressions or fail to disentangle them from the speech content. To address this issue, this paper proposes an end-to-end neural network that disentangles the different emotions in speech so as to generate rich 3D facial expressions. Specifically, we introduce an emotion disentangling encoder (EDE) that separates the emotion and content in speech by cross-reconstructing speech signals with different emotion labels. An emotion-guided feature fusion decoder is then employed to generate a 3D talking face with enhanced emotion. The decoder is driven by the disentangled identity, emotion, and content embeddings, enabling controllable personal and emotional styles. Finally, given the scarcity of 3D emotional talking-face data, we resort to supervision from facial blendshapes, which enables the reconstruction of plausible 3D faces from 2D emotional data, and we contribute a large-scale 3D emotional talking-face dataset (3D-ETF) to train the network. Our experiments and user studies demonstrate that our approach outperforms state-of-the-art methods and exhibits more diverse facial movements. We recommend watching the supplementary video: this https URL
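The cross-reconstruction idea behind the EDE can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: the linear "encoders", embedding sizes, and loss pairing below are all hypothetical stand-ins. The key point it shows is the pairing rule: given two clips sharing the same spoken content but carrying different emotion labels, the content embedding of one clip combined with the emotion embedding of the other should reconstruct the *other* clip.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: D_IN for a speech feature frame, D_EMB for each
# embedding. Linear maps stand in for the paper's learned encoders/decoder.
D_IN, D_EMB = 16, 8
W_content = rng.standard_normal((D_IN, D_EMB))
W_emotion = rng.standard_normal((D_IN, D_EMB))
W_decoder = rng.standard_normal((2 * D_EMB, D_IN))

def encode(x):
    # Split a speech feature into a content embedding and an emotion embedding.
    return x @ W_content, x @ W_emotion

def decode(content_emb, emotion_emb):
    # Fuse the two embeddings and map back to the speech-feature space.
    return np.concatenate([content_emb, emotion_emb], axis=-1) @ W_decoder

def cross_reconstruction_loss(x_a, x_b):
    """x_a and x_b share spoken content but have different emotion labels.
    Swapping emotion embeddings across the pair should reconstruct the
    clip whose emotion was borrowed."""
    c_a, e_a = encode(x_a)
    c_b, e_b = encode(x_b)
    rec_b = decode(c_a, e_b)  # content of A + emotion of B -> should match B
    rec_a = decode(c_b, e_a)  # content of B + emotion of A -> should match A
    return np.mean((rec_b - x_b) ** 2) + np.mean((rec_a - x_a) ** 2)

# Toy "clips": same sentence, two emotion renditions (random stand-ins here).
x_happy = rng.standard_normal(D_IN)
x_sad = rng.standard_normal(D_IN)
loss = cross_reconstruction_loss(x_happy, x_sad)
print(loss)
```

Minimizing this loss over many such pairs pushes emotion-specific information out of the content embedding (and vice versa), since only the correctly factored representation lets the swapped pair reconstruct both targets.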
URL
https://arxiv.org/abs/2303.11089