Abstract
Generating photo-realistic video portraits driven by arbitrary speech audio is a crucial problem in film-making and virtual reality. Recently, several works have explored the use of neural radiance fields (NeRF) for this task to improve 3D realism and image fidelity. However, the generalizability of previous NeRF-based methods to out-of-domain audio is limited by the small scale of their training data. In this work, we propose GeneFace, a generalized and high-fidelity NeRF-based talking face generation method that produces natural results for a wide range of out-of-domain audio. Specifically, we learn a variational motion generator on a large lip-reading corpus and introduce a domain-adaptive post-net to calibrate its output. We then learn a NeRF-based renderer conditioned on the predicted facial motion, and propose a head-aware torso-NeRF to eliminate the head-torso separation problem. Extensive experiments show that our method achieves more generalized and higher-fidelity talking face generation than previous methods.
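At a high level, the abstract describes a three-stage pipeline: audio features are mapped to facial motion by a variational generator, calibrated by a domain-adaptive post-net, and finally rendered by a motion-conditioned NeRF. The sketch below illustrates only this data flow; every function name, signature, and shape is an illustrative assumption, not the actual GeneFace API.

```python
# Hypothetical sketch of the pipeline described in the abstract:
# audio -> variational motion generator -> domain-adaptive post-net
# -> NeRF-based renderer. All names and return values are stubs.

def variational_motion_generator(audio_features):
    # Assumed stage 1: predict per-frame 3D facial motion from audio,
    # trained on a large lip-reading corpus in the paper.
    return [[0.0, 0.0, 0.0] for _ in audio_features]  # one stub 3D point per frame

def domain_adaptive_postnet(motion):
    # Assumed stage 2: calibrate the generic motion toward the target
    # speaker's domain; identity mapping here as a placeholder.
    return motion

def nerf_renderer(motion_frame):
    # Assumed stage 3: render one image frame conditioned on facial
    # motion (head NeRF plus the head-aware torso-NeRF).
    return "rendered_frame"  # stub image

def generate_talking_face(audio_features):
    motion = domain_adaptive_postnet(variational_motion_generator(audio_features))
    return [nerf_renderer(m) for m in motion]
```

The key design point conveyed by the abstract is the decoupling: the motion generator can be trained on large generic corpora for generalization, while the NeRF renderer is person-specific for fidelity.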
URL
https://arxiv.org/abs/2301.13430