Abstract
Recent years have witnessed great progress in creating vivid audio-driven portraits from monocular videos. However, seamlessly adapting these video avatars to scenes with different backgrounds and lighting conditions remains unsolved. Meanwhile, existing relighting studies mostly rely on dynamically lit or multi-view data, which are too expensive to collect for video portrait creation. To bridge this gap, we propose ReliTalk, a novel framework for relightable audio-driven talking portrait generation from monocular videos. Our key insight is to decompose the portrait's reflectance from implicitly learned audio-driven facial normals and images. Specifically, we incorporate 3D facial priors derived from audio features to predict delicate normal maps through implicit functions. These initially predicted normals then play a crucial role in reflectance decomposition by dynamically estimating the lighting condition of the given video. Moreover, the stereoscopic face representation is refined with an identity-consistent loss under simulated multiple lighting conditions, addressing the ill-posed problem caused by the limited views available from a single monocular video. Extensive experiments validate the superiority of our proposed framework on both real and synthetic datasets. Our code is released at this https URL.
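For intuition, the reflectance-decomposition step described above can be sketched in a few lines. The sketch below assumes a Lambertian image formation model with second-order spherical-harmonics (SH) lighting, a common choice in monocular inverse rendering, and grayscale per-pixel intensities; the names sh_basis, estimate_lighting, and relight are illustrative and are not taken from the released code, whose exact formulation may differ.

    # Minimal sketch (not the authors' code) of Lambertian reflectance
    # decomposition with second-order spherical-harmonics lighting.
    import torch

    def sh_basis(normals):
        # Evaluate the 9 second-order SH basis functions at unit normals.
        # Constant factors are absorbed into the fitted coefficients.
        # normals: (N, 3) tensor of unit vectors.
        x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
        ones = torch.ones_like(x)
        return torch.stack([
            ones,                     # Y_00
            y, z, x,                  # Y_1-1, Y_10, Y_11
            x * y, y * z,             # Y_2-2, Y_2-1
            3 * z ** 2 - 1,           # Y_20
            x * z, x ** 2 - y ** 2,   # Y_21,  Y_22
        ], dim=-1)                    # (N, 9)

    def estimate_lighting(image, albedo, normals):
        # Least-squares fit of SH coefficients so that
        # image ~= albedo * (sh_basis(normals) @ coeffs).
        # image, albedo: (N,) intensities; normals: (N, 3).
        shading_target = image / albedo.clamp(min=1e-4)
        B = sh_basis(normals)                               # (N, 9)
        coeffs = torch.linalg.lstsq(B, shading_target.unsqueeze(-1)).solution
        return coeffs.squeeze(-1)                           # (9,)

    def relight(albedo, normals, coeffs):
        # Re-render the face under (possibly new) SH lighting.
        return albedo * (sh_basis(normals) @ coeffs)

Once the SH coefficients of the source video have been estimated, passing a different coefficient vector to relight simulates a new lighting condition; this is also how multiple lighting conditions could be simulated for the identity-consistent refinement mentioned in the abstract.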
URL
https://arxiv.org/abs/2309.02434