Abstract
Generating realistic talking faces is a long-standing topic in computer vision. Although significant progress has been made, generating high-quality dynamic faces with personalized details remains challenging, mainly because general models cannot represent personalized details and fail to generalize to unseen controllable parameters. In this work, we propose Myportrait, a simple, general, and flexible framework for neural portrait generation. We incorporate a personalized prior from a monocular video and a morphable prior from the 3D face morphable space to generate personalized details under novel controllable parameters. Given a monocular video of a single person, our framework supports both video-driven and audio-driven face animation. Depending on whether the test data is included in training, our method provides a real-time online version and a high-quality offline version. Comprehensive experiments across various metrics demonstrate the superior performance of our method over state-of-the-art methods. The code will be publicly available.
URL
https://arxiv.org/abs/2312.02703